Test Report: KVM_Linux_crio 17885

b721bab7b488b5e07b471be256ee12ce84535d3b:2024-01-03:32546

Failed tests (28/300)

Order  Failed test  Duration (s)
35 TestAddons/parallel/Ingress 155.07
49 TestAddons/StoppedEnableDisable 154.92
165 TestIngressAddonLegacy/serial/ValidateIngressAddons 172.44
213 TestMultiNode/serial/PingHostFrom2Pods 3.15
220 TestMultiNode/serial/RestartKeepsNodes 687.12
222 TestMultiNode/serial/StopMultiNode 143.32
229 TestPreload 277.95
235 TestRunningBinaryUpgrade 164.43
261 TestStoppedBinaryUpgrade/Upgrade 299.78
273 TestPause/serial/SecondStartNoReconfiguration 99.99
328 TestStartStop/group/old-k8s-version/serial/Stop 139.65
331 TestStartStop/group/embed-certs/serial/Stop 140
335 TestStartStop/group/no-preload/serial/Stop 140.27
337 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.74
338 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 12.38
340 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
342 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
343 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
346 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.19
347 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 543.17
348 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 543.12
349 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 543.12
350 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 523.07
351 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 449.68
352 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 346.28
353 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 432.63
358 TestStartStop/group/newest-cni/serial/Stop 140.33
359 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 12.42
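Each entry above is the Go subtest name, so a single failure can be retried in isolation. A minimal local re-run sketch from a minikube source checkout (the test/integration path and the --minikube-start-args flag are assumptions about the repository layout, not something recorded in this report; the driver and runtime flags mirror this job's configuration):

	# hypothetical re-run of one failed test; assumes a working kvm2 host
	# and a freshly built out/minikube-linux-amd64 binary
	go test ./test/integration -v -timeout 90m \
	  -run "TestAddons/parallel/Ingress" \
	  --minikube-start-args="--driver=kvm2 --container-runtime=crio"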
TestAddons/parallel/Ingress (155.07s)
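Summary: the nginx test pod reached Running, but the in-VM curl against the ingress (driven over minikube ssh) never returned a response and the step gave up after about 2m10s; the remote command's exit status 28 matches curl's "operation timed out" code. The check can be repeated by hand while the addons-848866 profile is still up, using the same command the test runs (see the log below):

	out/minikube-linux-amd64 -p addons-848866 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
	# a page of HTML back means ingress-nginx is routing the Host header;
	# a hang here reproduces the timeout recorded in this failure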

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-848866 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-848866 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-848866 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [490ea700-5fcd-4561-baf8-e43b2d4aafd3] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [490ea700-5fcd-4561-baf8-e43b2d4aafd3] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 13.004576686s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-848866 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-848866 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.822304486s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-848866 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-848866 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.253
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p addons-848866 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p addons-848866 addons disable ingress-dns --alsologtostderr -v=1: (1.214975768s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-848866 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p addons-848866 addons disable ingress --alsologtostderr -v=1: (7.81477472s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-848866 -n addons-848866
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-848866 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-848866 logs -n 25: (1.295536856s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-302470 | jenkins | v1.32.0 | 03 Jan 24 18:57 UTC |                     |
	|         | -p download-only-302470                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.32.0 | 03 Jan 24 18:58 UTC | 03 Jan 24 18:58 UTC |
	| delete  | -p download-only-302470                                                                     | download-only-302470 | jenkins | v1.32.0 | 03 Jan 24 18:58 UTC | 03 Jan 24 18:58 UTC |
	| delete  | -p download-only-302470                                                                     | download-only-302470 | jenkins | v1.32.0 | 03 Jan 24 18:58 UTC | 03 Jan 24 18:58 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-440543 | jenkins | v1.32.0 | 03 Jan 24 18:58 UTC |                     |
	|         | binary-mirror-440543                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:33485                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-440543                                                                     | binary-mirror-440543 | jenkins | v1.32.0 | 03 Jan 24 18:58 UTC | 03 Jan 24 18:58 UTC |
	| addons  | disable dashboard -p                                                                        | addons-848866        | jenkins | v1.32.0 | 03 Jan 24 18:58 UTC |                     |
	|         | addons-848866                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-848866        | jenkins | v1.32.0 | 03 Jan 24 18:58 UTC |                     |
	|         | addons-848866                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-848866 --wait=true                                                                | addons-848866        | jenkins | v1.32.0 | 03 Jan 24 18:58 UTC | 03 Jan 24 19:00 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --driver=kvm2                                                                 |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                                   |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-848866 addons                                                                        | addons-848866        | jenkins | v1.32.0 | 03 Jan 24 19:01 UTC | 03 Jan 24 19:01 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-848866        | jenkins | v1.32.0 | 03 Jan 24 19:01 UTC | 03 Jan 24 19:01 UTC |
	|         | addons-848866                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-848866 ssh cat                                                                       | addons-848866        | jenkins | v1.32.0 | 03 Jan 24 19:01 UTC | 03 Jan 24 19:01 UTC |
	|         | /opt/local-path-provisioner/pvc-63737751-111f-49e1-b285-e8695e5515cf_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-848866 addons disable                                                                | addons-848866        | jenkins | v1.32.0 | 03 Jan 24 19:01 UTC | 03 Jan 24 19:01 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-848866 ip                                                                            | addons-848866        | jenkins | v1.32.0 | 03 Jan 24 19:01 UTC | 03 Jan 24 19:01 UTC |
	| addons  | addons-848866 addons disable                                                                | addons-848866        | jenkins | v1.32.0 | 03 Jan 24 19:01 UTC | 03 Jan 24 19:01 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-848866 ssh curl -s                                                                   | addons-848866        | jenkins | v1.32.0 | 03 Jan 24 19:01 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-848866        | jenkins | v1.32.0 | 03 Jan 24 19:01 UTC | 03 Jan 24 19:01 UTC |
	|         | -p addons-848866                                                                            |                      |         |         |                     |                     |
	| addons  | addons-848866 addons disable                                                                | addons-848866        | jenkins | v1.32.0 | 03 Jan 24 19:01 UTC | 03 Jan 24 19:01 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-848866        | jenkins | v1.32.0 | 03 Jan 24 19:01 UTC | 03 Jan 24 19:01 UTC |
	|         | -p addons-848866                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-848866        | jenkins | v1.32.0 | 03 Jan 24 19:01 UTC | 03 Jan 24 19:01 UTC |
	|         | addons-848866                                                                               |                      |         |         |                     |                     |
	| addons  | addons-848866 addons                                                                        | addons-848866        | jenkins | v1.32.0 | 03 Jan 24 19:02 UTC | 03 Jan 24 19:02 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-848866 addons                                                                        | addons-848866        | jenkins | v1.32.0 | 03 Jan 24 19:02 UTC | 03 Jan 24 19:02 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-848866 ip                                                                            | addons-848866        | jenkins | v1.32.0 | 03 Jan 24 19:03 UTC | 03 Jan 24 19:03 UTC |
	| addons  | addons-848866 addons disable                                                                | addons-848866        | jenkins | v1.32.0 | 03 Jan 24 19:03 UTC | 03 Jan 24 19:03 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-848866 addons disable                                                                | addons-848866        | jenkins | v1.32.0 | 03 Jan 24 19:03 UTC | 03 Jan 24 19:03 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/03 18:58:11
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0103 18:58:11.492821   17285 out.go:296] Setting OutFile to fd 1 ...
	I0103 18:58:11.492949   17285 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 18:58:11.492957   17285 out.go:309] Setting ErrFile to fd 2...
	I0103 18:58:11.492962   17285 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 18:58:11.493154   17285 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17885-9609/.minikube/bin
	I0103 18:58:11.493781   17285 out.go:303] Setting JSON to false
	I0103 18:58:11.494610   17285 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2439,"bootTime":1704305853,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0103 18:58:11.494674   17285 start.go:138] virtualization: kvm guest
	I0103 18:58:11.496997   17285 out.go:177] * [addons-848866] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0103 18:58:11.498716   17285 out.go:177]   - MINIKUBE_LOCATION=17885
	I0103 18:58:11.500184   17285 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0103 18:58:11.498749   17285 notify.go:220] Checking for updates...
	I0103 18:58:11.503336   17285 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17885-9609/kubeconfig
	I0103 18:58:11.504710   17285 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17885-9609/.minikube
	I0103 18:58:11.506077   17285 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0103 18:58:11.507475   17285 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0103 18:58:11.509080   17285 driver.go:392] Setting default libvirt URI to qemu:///system
	I0103 18:58:11.540139   17285 out.go:177] * Using the kvm2 driver based on user configuration
	I0103 18:58:11.541614   17285 start.go:298] selected driver: kvm2
	I0103 18:58:11.541642   17285 start.go:902] validating driver "kvm2" against <nil>
	I0103 18:58:11.541652   17285 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0103 18:58:11.542361   17285 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 18:58:11.542513   17285 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17885-9609/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0103 18:58:11.557158   17285 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0103 18:58:11.557263   17285 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0103 18:58:11.557612   17285 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0103 18:58:11.557686   17285 cni.go:84] Creating CNI manager for ""
	I0103 18:58:11.557701   17285 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0103 18:58:11.557718   17285 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0103 18:58:11.557742   17285 start_flags.go:323] config:
	{Name:addons-848866 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-848866 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISo
cket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 18:58:11.557953   17285 iso.go:125] acquiring lock: {Name:mk59d09085a9554144b68de9b7bfe0e0fce53cc5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 18:58:11.560448   17285 out.go:177] * Starting control plane node addons-848866 in cluster addons-848866
	I0103 18:58:11.561792   17285 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0103 18:58:11.561832   17285 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0103 18:58:11.561841   17285 cache.go:56] Caching tarball of preloaded images
	I0103 18:58:11.561914   17285 preload.go:174] Found /home/jenkins/minikube-integration/17885-9609/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0103 18:58:11.561924   17285 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0103 18:58:11.562224   17285 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/addons-848866/config.json ...
	I0103 18:58:11.562244   17285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/addons-848866/config.json: {Name:mk8fd566efa72acdac4a4986942b3840377253c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 18:58:11.562363   17285 start.go:365] acquiring machines lock for addons-848866: {Name:mk43df5d7e9fef8aa5f3e5c539ca15bff35ae8cf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0103 18:58:11.562404   17285 start.go:369] acquired machines lock for "addons-848866" in 28.603µs
	I0103 18:58:11.562424   17285 start.go:93] Provisioning new machine with config: &{Name:addons-848866 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:addons-848866 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountO
ptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0103 18:58:11.562475   17285 start.go:125] createHost starting for "" (driver="kvm2")
	I0103 18:58:11.564904   17285 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0103 18:58:11.565054   17285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 18:58:11.565100   17285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 18:58:11.578857   17285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36637
	I0103 18:58:11.579268   17285 main.go:141] libmachine: () Calling .GetVersion
	I0103 18:58:11.579768   17285 main.go:141] libmachine: Using API Version  1
	I0103 18:58:11.579792   17285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 18:58:11.580086   17285 main.go:141] libmachine: () Calling .GetMachineName
	I0103 18:58:11.580247   17285 main.go:141] libmachine: (addons-848866) Calling .GetMachineName
	I0103 18:58:11.580408   17285 main.go:141] libmachine: (addons-848866) Calling .DriverName
	I0103 18:58:11.580529   17285 start.go:159] libmachine.API.Create for "addons-848866" (driver="kvm2")
	I0103 18:58:11.580558   17285 client.go:168] LocalClient.Create starting
	I0103 18:58:11.580604   17285 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem
	I0103 18:58:11.795676   17285 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem
	I0103 18:58:11.983843   17285 main.go:141] libmachine: Running pre-create checks...
	I0103 18:58:11.983866   17285 main.go:141] libmachine: (addons-848866) Calling .PreCreateCheck
	I0103 18:58:11.984352   17285 main.go:141] libmachine: (addons-848866) Calling .GetConfigRaw
	I0103 18:58:11.984759   17285 main.go:141] libmachine: Creating machine...
	I0103 18:58:11.984774   17285 main.go:141] libmachine: (addons-848866) Calling .Create
	I0103 18:58:11.984898   17285 main.go:141] libmachine: (addons-848866) Creating KVM machine...
	I0103 18:58:11.986149   17285 main.go:141] libmachine: (addons-848866) DBG | found existing default KVM network
	I0103 18:58:11.987011   17285 main.go:141] libmachine: (addons-848866) DBG | I0103 18:58:11.986852   17307 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015a40}
	I0103 18:58:12.083678   17285 main.go:141] libmachine: (addons-848866) DBG | trying to create private KVM network mk-addons-848866 192.168.39.0/24...
	I0103 18:58:12.155510   17285 main.go:141] libmachine: (addons-848866) DBG | private KVM network mk-addons-848866 192.168.39.0/24 created
	I0103 18:58:12.155552   17285 main.go:141] libmachine: (addons-848866) DBG | I0103 18:58:12.155479   17307 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17885-9609/.minikube
	I0103 18:58:12.155568   17285 main.go:141] libmachine: (addons-848866) Setting up store path in /home/jenkins/minikube-integration/17885-9609/.minikube/machines/addons-848866 ...
	I0103 18:58:12.155589   17285 main.go:141] libmachine: (addons-848866) Building disk image from file:///home/jenkins/minikube-integration/17885-9609/.minikube/cache/iso/amd64/minikube-v1.32.1-1702708929-17806-amd64.iso
	I0103 18:58:12.155613   17285 main.go:141] libmachine: (addons-848866) Downloading /home/jenkins/minikube-integration/17885-9609/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17885-9609/.minikube/cache/iso/amd64/minikube-v1.32.1-1702708929-17806-amd64.iso...
	I0103 18:58:12.388769   17285 main.go:141] libmachine: (addons-848866) DBG | I0103 18:58:12.388640   17307 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/addons-848866/id_rsa...
	I0103 18:58:12.583112   17285 main.go:141] libmachine: (addons-848866) DBG | I0103 18:58:12.582953   17307 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/addons-848866/addons-848866.rawdisk...
	I0103 18:58:12.583153   17285 main.go:141] libmachine: (addons-848866) DBG | Writing magic tar header
	I0103 18:58:12.583169   17285 main.go:141] libmachine: (addons-848866) DBG | Writing SSH key tar header
	I0103 18:58:12.583183   17285 main.go:141] libmachine: (addons-848866) DBG | I0103 18:58:12.583107   17307 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17885-9609/.minikube/machines/addons-848866 ...
	I0103 18:58:12.583308   17285 main.go:141] libmachine: (addons-848866) Setting executable bit set on /home/jenkins/minikube-integration/17885-9609/.minikube/machines/addons-848866 (perms=drwx------)
	I0103 18:58:12.583331   17285 main.go:141] libmachine: (addons-848866) Setting executable bit set on /home/jenkins/minikube-integration/17885-9609/.minikube/machines (perms=drwxr-xr-x)
	I0103 18:58:12.583341   17285 main.go:141] libmachine: (addons-848866) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/addons-848866
	I0103 18:58:12.583353   17285 main.go:141] libmachine: (addons-848866) Setting executable bit set on /home/jenkins/minikube-integration/17885-9609/.minikube (perms=drwxr-xr-x)
	I0103 18:58:12.583360   17285 main.go:141] libmachine: (addons-848866) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17885-9609/.minikube/machines
	I0103 18:58:12.583369   17285 main.go:141] libmachine: (addons-848866) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17885-9609/.minikube
	I0103 18:58:12.583376   17285 main.go:141] libmachine: (addons-848866) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17885-9609
	I0103 18:58:12.583386   17285 main.go:141] libmachine: (addons-848866) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0103 18:58:12.583392   17285 main.go:141] libmachine: (addons-848866) DBG | Checking permissions on dir: /home/jenkins
	I0103 18:58:12.583402   17285 main.go:141] libmachine: (addons-848866) DBG | Checking permissions on dir: /home
	I0103 18:58:12.583408   17285 main.go:141] libmachine: (addons-848866) DBG | Skipping /home - not owner
	I0103 18:58:12.583422   17285 main.go:141] libmachine: (addons-848866) Setting executable bit set on /home/jenkins/minikube-integration/17885-9609 (perms=drwxrwxr-x)
	I0103 18:58:12.583434   17285 main.go:141] libmachine: (addons-848866) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0103 18:58:12.583444   17285 main.go:141] libmachine: (addons-848866) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0103 18:58:12.583451   17285 main.go:141] libmachine: (addons-848866) Creating domain...
	I0103 18:58:12.584341   17285 main.go:141] libmachine: (addons-848866) define libvirt domain using xml: 
	I0103 18:58:12.584363   17285 main.go:141] libmachine: (addons-848866) <domain type='kvm'>
	I0103 18:58:12.584373   17285 main.go:141] libmachine: (addons-848866)   <name>addons-848866</name>
	I0103 18:58:12.584383   17285 main.go:141] libmachine: (addons-848866)   <memory unit='MiB'>4000</memory>
	I0103 18:58:12.584404   17285 main.go:141] libmachine: (addons-848866)   <vcpu>2</vcpu>
	I0103 18:58:12.584426   17285 main.go:141] libmachine: (addons-848866)   <features>
	I0103 18:58:12.584438   17285 main.go:141] libmachine: (addons-848866)     <acpi/>
	I0103 18:58:12.584446   17285 main.go:141] libmachine: (addons-848866)     <apic/>
	I0103 18:58:12.584453   17285 main.go:141] libmachine: (addons-848866)     <pae/>
	I0103 18:58:12.584463   17285 main.go:141] libmachine: (addons-848866)     
	I0103 18:58:12.584494   17285 main.go:141] libmachine: (addons-848866)   </features>
	I0103 18:58:12.584517   17285 main.go:141] libmachine: (addons-848866)   <cpu mode='host-passthrough'>
	I0103 18:58:12.584530   17285 main.go:141] libmachine: (addons-848866)   
	I0103 18:58:12.584542   17285 main.go:141] libmachine: (addons-848866)   </cpu>
	I0103 18:58:12.584556   17285 main.go:141] libmachine: (addons-848866)   <os>
	I0103 18:58:12.584568   17285 main.go:141] libmachine: (addons-848866)     <type>hvm</type>
	I0103 18:58:12.584588   17285 main.go:141] libmachine: (addons-848866)     <boot dev='cdrom'/>
	I0103 18:58:12.584599   17285 main.go:141] libmachine: (addons-848866)     <boot dev='hd'/>
	I0103 18:58:12.584615   17285 main.go:141] libmachine: (addons-848866)     <bootmenu enable='no'/>
	I0103 18:58:12.584627   17285 main.go:141] libmachine: (addons-848866)   </os>
	I0103 18:58:12.584642   17285 main.go:141] libmachine: (addons-848866)   <devices>
	I0103 18:58:12.584657   17285 main.go:141] libmachine: (addons-848866)     <disk type='file' device='cdrom'>
	I0103 18:58:12.584670   17285 main.go:141] libmachine: (addons-848866)       <source file='/home/jenkins/minikube-integration/17885-9609/.minikube/machines/addons-848866/boot2docker.iso'/>
	I0103 18:58:12.584696   17285 main.go:141] libmachine: (addons-848866)       <target dev='hdc' bus='scsi'/>
	I0103 18:58:12.584711   17285 main.go:141] libmachine: (addons-848866)       <readonly/>
	I0103 18:58:12.584723   17285 main.go:141] libmachine: (addons-848866)     </disk>
	I0103 18:58:12.584736   17285 main.go:141] libmachine: (addons-848866)     <disk type='file' device='disk'>
	I0103 18:58:12.584748   17285 main.go:141] libmachine: (addons-848866)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0103 18:58:12.584764   17285 main.go:141] libmachine: (addons-848866)       <source file='/home/jenkins/minikube-integration/17885-9609/.minikube/machines/addons-848866/addons-848866.rawdisk'/>
	I0103 18:58:12.584782   17285 main.go:141] libmachine: (addons-848866)       <target dev='hda' bus='virtio'/>
	I0103 18:58:12.584795   17285 main.go:141] libmachine: (addons-848866)     </disk>
	I0103 18:58:12.584808   17285 main.go:141] libmachine: (addons-848866)     <interface type='network'>
	I0103 18:58:12.584822   17285 main.go:141] libmachine: (addons-848866)       <source network='mk-addons-848866'/>
	I0103 18:58:12.584833   17285 main.go:141] libmachine: (addons-848866)       <model type='virtio'/>
	I0103 18:58:12.584843   17285 main.go:141] libmachine: (addons-848866)     </interface>
	I0103 18:58:12.584860   17285 main.go:141] libmachine: (addons-848866)     <interface type='network'>
	I0103 18:58:12.584874   17285 main.go:141] libmachine: (addons-848866)       <source network='default'/>
	I0103 18:58:12.584887   17285 main.go:141] libmachine: (addons-848866)       <model type='virtio'/>
	I0103 18:58:12.584900   17285 main.go:141] libmachine: (addons-848866)     </interface>
	I0103 18:58:12.584912   17285 main.go:141] libmachine: (addons-848866)     <serial type='pty'>
	I0103 18:58:12.584922   17285 main.go:141] libmachine: (addons-848866)       <target port='0'/>
	I0103 18:58:12.584953   17285 main.go:141] libmachine: (addons-848866)     </serial>
	I0103 18:58:12.584968   17285 main.go:141] libmachine: (addons-848866)     <console type='pty'>
	I0103 18:58:12.584980   17285 main.go:141] libmachine: (addons-848866)       <target type='serial' port='0'/>
	I0103 18:58:12.585050   17285 main.go:141] libmachine: (addons-848866)     </console>
	I0103 18:58:12.585077   17285 main.go:141] libmachine: (addons-848866)     <rng model='virtio'>
	I0103 18:58:12.585119   17285 main.go:141] libmachine: (addons-848866)       <backend model='random'>/dev/random</backend>
	I0103 18:58:12.585132   17285 main.go:141] libmachine: (addons-848866)     </rng>
	I0103 18:58:12.585142   17285 main.go:141] libmachine: (addons-848866)     
	I0103 18:58:12.585153   17285 main.go:141] libmachine: (addons-848866)     
	I0103 18:58:12.585167   17285 main.go:141] libmachine: (addons-848866)   </devices>
	I0103 18:58:12.585181   17285 main.go:141] libmachine: (addons-848866) </domain>
	I0103 18:58:12.585200   17285 main.go:141] libmachine: (addons-848866) 
	I0103 18:58:12.667126   17285 main.go:141] libmachine: (addons-848866) DBG | domain addons-848866 has defined MAC address 52:54:00:27:1a:27 in network default
	I0103 18:58:12.667664   17285 main.go:141] libmachine: (addons-848866) Ensuring networks are active...
	I0103 18:58:12.667695   17285 main.go:141] libmachine: (addons-848866) DBG | domain addons-848866 has defined MAC address 52:54:00:c3:68:28 in network mk-addons-848866
	I0103 18:58:12.668456   17285 main.go:141] libmachine: (addons-848866) Ensuring network default is active
	I0103 18:58:12.668702   17285 main.go:141] libmachine: (addons-848866) Ensuring network mk-addons-848866 is active
	I0103 18:58:12.670796   17285 main.go:141] libmachine: (addons-848866) Getting domain xml...
	I0103 18:58:12.671583   17285 main.go:141] libmachine: (addons-848866) Creating domain...
	I0103 18:58:14.231621   17285 main.go:141] libmachine: (addons-848866) Waiting to get IP...
	I0103 18:58:14.232251   17285 main.go:141] libmachine: (addons-848866) DBG | domain addons-848866 has defined MAC address 52:54:00:c3:68:28 in network mk-addons-848866
	I0103 18:58:14.232633   17285 main.go:141] libmachine: (addons-848866) DBG | unable to find current IP address of domain addons-848866 in network mk-addons-848866
	I0103 18:58:14.232656   17285 main.go:141] libmachine: (addons-848866) DBG | I0103 18:58:14.232606   17307 retry.go:31] will retry after 227.783673ms: waiting for machine to come up
	I0103 18:58:14.462071   17285 main.go:141] libmachine: (addons-848866) DBG | domain addons-848866 has defined MAC address 52:54:00:c3:68:28 in network mk-addons-848866
	I0103 18:58:14.462582   17285 main.go:141] libmachine: (addons-848866) DBG | unable to find current IP address of domain addons-848866 in network mk-addons-848866
	I0103 18:58:14.462611   17285 main.go:141] libmachine: (addons-848866) DBG | I0103 18:58:14.462549   17307 retry.go:31] will retry after 257.986835ms: waiting for machine to come up
	I0103 18:58:14.722029   17285 main.go:141] libmachine: (addons-848866) DBG | domain addons-848866 has defined MAC address 52:54:00:c3:68:28 in network mk-addons-848866
	I0103 18:58:14.722491   17285 main.go:141] libmachine: (addons-848866) DBG | unable to find current IP address of domain addons-848866 in network mk-addons-848866
	I0103 18:58:14.722540   17285 main.go:141] libmachine: (addons-848866) DBG | I0103 18:58:14.722451   17307 retry.go:31] will retry after 397.485555ms: waiting for machine to come up
	I0103 18:58:15.122062   17285 main.go:141] libmachine: (addons-848866) DBG | domain addons-848866 has defined MAC address 52:54:00:c3:68:28 in network mk-addons-848866
	I0103 18:58:15.122550   17285 main.go:141] libmachine: (addons-848866) DBG | unable to find current IP address of domain addons-848866 in network mk-addons-848866
	I0103 18:58:15.122582   17285 main.go:141] libmachine: (addons-848866) DBG | I0103 18:58:15.122485   17307 retry.go:31] will retry after 495.323728ms: waiting for machine to come up
	I0103 18:58:15.619136   17285 main.go:141] libmachine: (addons-848866) DBG | domain addons-848866 has defined MAC address 52:54:00:c3:68:28 in network mk-addons-848866
	I0103 18:58:15.619484   17285 main.go:141] libmachine: (addons-848866) DBG | unable to find current IP address of domain addons-848866 in network mk-addons-848866
	I0103 18:58:15.619510   17285 main.go:141] libmachine: (addons-848866) DBG | I0103 18:58:15.619440   17307 retry.go:31] will retry after 519.187583ms: waiting for machine to come up
	I0103 18:58:16.140022   17285 main.go:141] libmachine: (addons-848866) DBG | domain addons-848866 has defined MAC address 52:54:00:c3:68:28 in network mk-addons-848866
	I0103 18:58:16.140499   17285 main.go:141] libmachine: (addons-848866) DBG | unable to find current IP address of domain addons-848866 in network mk-addons-848866
	I0103 18:58:16.140531   17285 main.go:141] libmachine: (addons-848866) DBG | I0103 18:58:16.140438   17307 retry.go:31] will retry after 637.017949ms: waiting for machine to come up
	I0103 18:58:16.779239   17285 main.go:141] libmachine: (addons-848866) DBG | domain addons-848866 has defined MAC address 52:54:00:c3:68:28 in network mk-addons-848866
	I0103 18:58:16.779735   17285 main.go:141] libmachine: (addons-848866) DBG | unable to find current IP address of domain addons-848866 in network mk-addons-848866
	I0103 18:58:16.779765   17285 main.go:141] libmachine: (addons-848866) DBG | I0103 18:58:16.779670   17307 retry.go:31] will retry after 1.079212746s: waiting for machine to come up
	I0103 18:58:17.860156   17285 main.go:141] libmachine: (addons-848866) DBG | domain addons-848866 has defined MAC address 52:54:00:c3:68:28 in network mk-addons-848866
	I0103 18:58:17.860543   17285 main.go:141] libmachine: (addons-848866) DBG | unable to find current IP address of domain addons-848866 in network mk-addons-848866
	I0103 18:58:17.860574   17285 main.go:141] libmachine: (addons-848866) DBG | I0103 18:58:17.860481   17307 retry.go:31] will retry after 1.107173963s: waiting for machine to come up
	I0103 18:58:18.969715   17285 main.go:141] libmachine: (addons-848866) DBG | domain addons-848866 has defined MAC address 52:54:00:c3:68:28 in network mk-addons-848866
	I0103 18:58:18.970108   17285 main.go:141] libmachine: (addons-848866) DBG | unable to find current IP address of domain addons-848866 in network mk-addons-848866
	I0103 18:58:18.970144   17285 main.go:141] libmachine: (addons-848866) DBG | I0103 18:58:18.970062   17307 retry.go:31] will retry after 1.713758558s: waiting for machine to come up
	I0103 18:58:20.685863   17285 main.go:141] libmachine: (addons-848866) DBG | domain addons-848866 has defined MAC address 52:54:00:c3:68:28 in network mk-addons-848866
	I0103 18:58:20.686257   17285 main.go:141] libmachine: (addons-848866) DBG | unable to find current IP address of domain addons-848866 in network mk-addons-848866
	I0103 18:58:20.686283   17285 main.go:141] libmachine: (addons-848866) DBG | I0103 18:58:20.686198   17307 retry.go:31] will retry after 2.065604136s: waiting for machine to come up
	I0103 18:58:22.753026   17285 main.go:141] libmachine: (addons-848866) DBG | domain addons-848866 has defined MAC address 52:54:00:c3:68:28 in network mk-addons-848866
	I0103 18:58:22.753481   17285 main.go:141] libmachine: (addons-848866) DBG | unable to find current IP address of domain addons-848866 in network mk-addons-848866
	I0103 18:58:22.753522   17285 main.go:141] libmachine: (addons-848866) DBG | I0103 18:58:22.753417   17307 retry.go:31] will retry after 2.889940064s: waiting for machine to come up
	I0103 18:58:25.644744   17285 main.go:141] libmachine: (addons-848866) DBG | domain addons-848866 has defined MAC address 52:54:00:c3:68:28 in network mk-addons-848866
	I0103 18:58:25.645245   17285 main.go:141] libmachine: (addons-848866) DBG | unable to find current IP address of domain addons-848866 in network mk-addons-848866
	I0103 18:58:25.645270   17285 main.go:141] libmachine: (addons-848866) DBG | I0103 18:58:25.645201   17307 retry.go:31] will retry after 2.373880067s: waiting for machine to come up
	I0103 18:58:28.021285   17285 main.go:141] libmachine: (addons-848866) DBG | domain addons-848866 has defined MAC address 52:54:00:c3:68:28 in network mk-addons-848866
	I0103 18:58:28.021980   17285 main.go:141] libmachine: (addons-848866) DBG | unable to find current IP address of domain addons-848866 in network mk-addons-848866
	I0103 18:58:28.022023   17285 main.go:141] libmachine: (addons-848866) DBG | I0103 18:58:28.021890   17307 retry.go:31] will retry after 2.90711952s: waiting for machine to come up
	I0103 18:58:30.933050   17285 main.go:141] libmachine: (addons-848866) DBG | domain addons-848866 has defined MAC address 52:54:00:c3:68:28 in network mk-addons-848866
	I0103 18:58:30.933409   17285 main.go:141] libmachine: (addons-848866) DBG | unable to find current IP address of domain addons-848866 in network mk-addons-848866
	I0103 18:58:30.933441   17285 main.go:141] libmachine: (addons-848866) DBG | I0103 18:58:30.933374   17307 retry.go:31] will retry after 3.762738588s: waiting for machine to come up
	I0103 18:58:34.700226   17285 main.go:141] libmachine: (addons-848866) DBG | domain addons-848866 has defined MAC address 52:54:00:c3:68:28 in network mk-addons-848866
	I0103 18:58:34.700726   17285 main.go:141] libmachine: (addons-848866) Found IP for machine: 192.168.39.253
	I0103 18:58:34.700744   17285 main.go:141] libmachine: (addons-848866) DBG | domain addons-848866 has current primary IP address 192.168.39.253 and MAC address 52:54:00:c3:68:28 in network mk-addons-848866
	I0103 18:58:34.700750   17285 main.go:141] libmachine: (addons-848866) Reserving static IP address...
	I0103 18:58:34.701127   17285 main.go:141] libmachine: (addons-848866) DBG | unable to find host DHCP lease matching {name: "addons-848866", mac: "52:54:00:c3:68:28", ip: "192.168.39.253"} in network mk-addons-848866
	I0103 18:58:34.777858   17285 main.go:141] libmachine: (addons-848866) DBG | Getting to WaitForSSH function...
	I0103 18:58:34.777913   17285 main.go:141] libmachine: (addons-848866) Reserved static IP address: 192.168.39.253
	I0103 18:58:34.777929   17285 main.go:141] libmachine: (addons-848866) Waiting for SSH to be available...
	I0103 18:58:34.780997   17285 main.go:141] libmachine: (addons-848866) DBG | domain addons-848866 has defined MAC address 52:54:00:c3:68:28 in network mk-addons-848866
	I0103 18:58:34.781468   17285 main.go:141] libmachine: (addons-848866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:68:28", ip: ""} in network mk-addons-848866: {Iface:virbr1 ExpiryTime:2024-01-03 19:58:27 +0000 UTC Type:0 Mac:52:54:00:c3:68:28 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c3:68:28}
	I0103 18:58:34.781498   17285 main.go:141] libmachine: (addons-848866) DBG | domain addons-848866 has defined IP address 192.168.39.253 and MAC address 52:54:00:c3:68:28 in network mk-addons-848866
	I0103 18:58:34.781605   17285 main.go:141] libmachine: (addons-848866) DBG | Using SSH client type: external
	I0103 18:58:34.781636   17285 main.go:141] libmachine: (addons-848866) DBG | Using SSH private key: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/addons-848866/id_rsa (-rw-------)
	I0103 18:58:34.781675   17285 main.go:141] libmachine: (addons-848866) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.253 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17885-9609/.minikube/machines/addons-848866/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0103 18:58:34.781694   17285 main.go:141] libmachine: (addons-848866) DBG | About to run SSH command:
	I0103 18:58:34.781708   17285 main.go:141] libmachine: (addons-848866) DBG | exit 0
	I0103 18:58:34.874366   17285 main.go:141] libmachine: (addons-848866) DBG | SSH cmd err, output: <nil>: 
	I0103 18:58:34.874642   17285 main.go:141] libmachine: (addons-848866) KVM machine creation complete!
	I0103 18:58:34.875071   17285 main.go:141] libmachine: (addons-848866) Calling .GetConfigRaw
	I0103 18:58:34.875572   17285 main.go:141] libmachine: (addons-848866) Calling .DriverName
	I0103 18:58:34.875787   17285 main.go:141] libmachine: (addons-848866) Calling .DriverName
	I0103 18:58:34.875953   17285 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0103 18:58:34.875978   17285 main.go:141] libmachine: (addons-848866) Calling .GetState
	I0103 18:58:34.877323   17285 main.go:141] libmachine: Detecting operating system of created instance...
	I0103 18:58:34.877339   17285 main.go:141] libmachine: Waiting for SSH to be available...
	I0103 18:58:34.877346   17285 main.go:141] libmachine: Getting to WaitForSSH function...
	I0103 18:58:34.877356   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHHostname
	I0103 18:58:34.879323   17285 main.go:141] libmachine: (addons-848866) DBG | domain addons-848866 has defined MAC address 52:54:00:c3:68:28 in network mk-addons-848866
	I0103 18:58:34.879635   17285 main.go:141] libmachine: (addons-848866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:68:28", ip: ""} in network mk-addons-848866: {Iface:virbr1 ExpiryTime:2024-01-03 19:58:27 +0000 UTC Type:0 Mac:52:54:00:c3:68:28 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:addons-848866 Clientid:01:52:54:00:c3:68:28}
	I0103 18:58:34.879663   17285 main.go:141] libmachine: (addons-848866) DBG | domain addons-848866 has defined IP address 192.168.39.253 and MAC address 52:54:00:c3:68:28 in network mk-addons-848866
	I0103 18:58:34.879787   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHPort
	I0103 18:58:34.880001   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHKeyPath
	I0103 18:58:34.880167   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHKeyPath
	I0103 18:58:34.880341   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHUsername
	I0103 18:58:34.880484   17285 main.go:141] libmachine: Using SSH client type: native
	I0103 18:58:34.880874   17285 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.39.253 22 <nil> <nil>}
	I0103 18:58:34.880888   17285 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0103 18:58:34.989853   17285 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0103 18:58:34.989882   17285 main.go:141] libmachine: Detecting the provisioner...
	I0103 18:58:34.989894   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHHostname
	I0103 18:58:34.992872   17285 main.go:141] libmachine: (addons-848866) DBG | domain addons-848866 has defined MAC address 52:54:00:c3:68:28 in network mk-addons-848866
	I0103 18:58:34.993316   17285 main.go:141] libmachine: (addons-848866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:68:28", ip: ""} in network mk-addons-848866: {Iface:virbr1 ExpiryTime:2024-01-03 19:58:27 +0000 UTC Type:0 Mac:52:54:00:c3:68:28 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:addons-848866 Clientid:01:52:54:00:c3:68:28}
	I0103 18:58:34.993340   17285 main.go:141] libmachine: (addons-848866) DBG | domain addons-848866 has defined IP address 192.168.39.253 and MAC address 52:54:00:c3:68:28 in network mk-addons-848866
	I0103 18:58:34.993679   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHPort
	I0103 18:58:34.993899   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHKeyPath
	I0103 18:58:34.994083   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHKeyPath
	I0103 18:58:34.994249   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHUsername
	I0103 18:58:34.994420   17285 main.go:141] libmachine: Using SSH client type: native
	I0103 18:58:34.994838   17285 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.39.253 22 <nil> <nil>}
	I0103 18:58:34.994858   17285 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0103 18:58:35.103128   17285 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gae27a7b-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0103 18:58:35.103231   17285 main.go:141] libmachine: found compatible host: buildroot
	I0103 18:58:35.103252   17285 main.go:141] libmachine: Provisioning with buildroot...
	I0103 18:58:35.103265   17285 main.go:141] libmachine: (addons-848866) Calling .GetMachineName
	I0103 18:58:35.103541   17285 buildroot.go:166] provisioning hostname "addons-848866"
	I0103 18:58:35.103562   17285 main.go:141] libmachine: (addons-848866) Calling .GetMachineName
	I0103 18:58:35.103732   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHHostname
	I0103 18:58:35.106478   17285 main.go:141] libmachine: (addons-848866) DBG | domain addons-848866 has defined MAC address 52:54:00:c3:68:28 in network mk-addons-848866
	I0103 18:58:35.106894   17285 main.go:141] libmachine: (addons-848866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:68:28", ip: ""} in network mk-addons-848866: {Iface:virbr1 ExpiryTime:2024-01-03 19:58:27 +0000 UTC Type:0 Mac:52:54:00:c3:68:28 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:addons-848866 Clientid:01:52:54:00:c3:68:28}
	I0103 18:58:35.106921   17285 main.go:141] libmachine: (addons-848866) DBG | domain addons-848866 has defined IP address 192.168.39.253 and MAC address 52:54:00:c3:68:28 in network mk-addons-848866
	I0103 18:58:35.107057   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHPort
	I0103 18:58:35.107258   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHKeyPath
	I0103 18:58:35.107407   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHKeyPath
	I0103 18:58:35.107604   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHUsername
	I0103 18:58:35.107805   17285 main.go:141] libmachine: Using SSH client type: native
	I0103 18:58:35.108163   17285 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.39.253 22 <nil> <nil>}
	I0103 18:58:35.108178   17285 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-848866 && echo "addons-848866" | sudo tee /etc/hostname
	I0103 18:58:35.226714   17285 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-848866
	
	I0103 18:58:35.226778   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHHostname
	I0103 18:58:35.229557   17285 main.go:141] libmachine: (addons-848866) DBG | domain addons-848866 has defined MAC address 52:54:00:c3:68:28 in network mk-addons-848866
	I0103 18:58:35.229836   17285 main.go:141] libmachine: (addons-848866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:68:28", ip: ""} in network mk-addons-848866: {Iface:virbr1 ExpiryTime:2024-01-03 19:58:27 +0000 UTC Type:0 Mac:52:54:00:c3:68:28 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:addons-848866 Clientid:01:52:54:00:c3:68:28}
	I0103 18:58:35.229860   17285 main.go:141] libmachine: (addons-848866) DBG | domain addons-848866 has defined IP address 192.168.39.253 and MAC address 52:54:00:c3:68:28 in network mk-addons-848866
	I0103 18:58:35.230103   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHPort
	I0103 18:58:35.230292   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHKeyPath
	I0103 18:58:35.230493   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHKeyPath
	I0103 18:58:35.230682   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHUsername
	I0103 18:58:35.230874   17285 main.go:141] libmachine: Using SSH client type: native
	I0103 18:58:35.231231   17285 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.39.253 22 <nil> <nil>}
	I0103 18:58:35.231258   17285 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-848866' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-848866/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-848866' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0103 18:58:35.346395   17285 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0103 18:58:35.346422   17285 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17885-9609/.minikube CaCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17885-9609/.minikube}
	I0103 18:58:35.346457   17285 buildroot.go:174] setting up certificates
	I0103 18:58:35.346470   17285 provision.go:83] configureAuth start
	I0103 18:58:35.346486   17285 main.go:141] libmachine: (addons-848866) Calling .GetMachineName
	I0103 18:58:35.346874   17285 main.go:141] libmachine: (addons-848866) Calling .GetIP
	I0103 18:58:35.349576   17285 main.go:141] libmachine: (addons-848866) DBG | domain addons-848866 has defined MAC address 52:54:00:c3:68:28 in network mk-addons-848866
	I0103 18:58:35.349924   17285 main.go:141] libmachine: (addons-848866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:68:28", ip: ""} in network mk-addons-848866: {Iface:virbr1 ExpiryTime:2024-01-03 19:58:27 +0000 UTC Type:0 Mac:52:54:00:c3:68:28 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:addons-848866 Clientid:01:52:54:00:c3:68:28}
	I0103 18:58:35.349951   17285 main.go:141] libmachine: (addons-848866) DBG | domain addons-848866 has defined IP address 192.168.39.253 and MAC address 52:54:00:c3:68:28 in network mk-addons-848866
	I0103 18:58:35.350109   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHHostname
	I0103 18:58:35.352647   17285 main.go:141] libmachine: (addons-848866) DBG | domain addons-848866 has defined MAC address 52:54:00:c3:68:28 in network mk-addons-848866
	I0103 18:58:35.353031   17285 main.go:141] libmachine: (addons-848866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:68:28", ip: ""} in network mk-addons-848866: {Iface:virbr1 ExpiryTime:2024-01-03 19:58:27 +0000 UTC Type:0 Mac:52:54:00:c3:68:28 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:addons-848866 Clientid:01:52:54:00:c3:68:28}
	I0103 18:58:35.353064   17285 main.go:141] libmachine: (addons-848866) DBG | domain addons-848866 has defined IP address 192.168.39.253 and MAC address 52:54:00:c3:68:28 in network mk-addons-848866
	I0103 18:58:35.353330   17285 provision.go:138] copyHostCerts
	I0103 18:58:35.353402   17285 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem (1078 bytes)
	I0103 18:58:35.353540   17285 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem (1123 bytes)
	I0103 18:58:35.353617   17285 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem (1679 bytes)
	I0103 18:58:35.353667   17285 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem org=jenkins.addons-848866 san=[192.168.39.253 192.168.39.253 localhost 127.0.0.1 minikube addons-848866]
	I0103 18:58:35.415171   17285 provision.go:172] copyRemoteCerts
	I0103 18:58:35.415240   17285 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0103 18:58:35.415263   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHHostname
	I0103 18:58:35.418105   17285 main.go:141] libmachine: (addons-848866) DBG | domain addons-848866 has defined MAC address 52:54:00:c3:68:28 in network mk-addons-848866
	I0103 18:58:35.418605   17285 main.go:141] libmachine: (addons-848866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:68:28", ip: ""} in network mk-addons-848866: {Iface:virbr1 ExpiryTime:2024-01-03 19:58:27 +0000 UTC Type:0 Mac:52:54:00:c3:68:28 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:addons-848866 Clientid:01:52:54:00:c3:68:28}
	I0103 18:58:35.418638   17285 main.go:141] libmachine: (addons-848866) DBG | domain addons-848866 has defined IP address 192.168.39.253 and MAC address 52:54:00:c3:68:28 in network mk-addons-848866
	I0103 18:58:35.418841   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHPort
	I0103 18:58:35.419066   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHKeyPath
	I0103 18:58:35.419256   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHUsername
	I0103 18:58:35.419423   17285 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/addons-848866/id_rsa Username:docker}
	I0103 18:58:35.503765   17285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0103 18:58:35.525278   17285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0103 18:58:35.547203   17285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0103 18:58:35.569295   17285 provision.go:86] duration metric: configureAuth took 222.811422ms
	I0103 18:58:35.569332   17285 buildroot.go:189] setting minikube options for container-runtime
	I0103 18:58:35.569543   17285 config.go:182] Loaded profile config "addons-848866": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 18:58:35.569625   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHHostname
	I0103 18:58:35.572544   17285 main.go:141] libmachine: (addons-848866) DBG | domain addons-848866 has defined MAC address 52:54:00:c3:68:28 in network mk-addons-848866
	I0103 18:58:35.572881   17285 main.go:141] libmachine: (addons-848866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:68:28", ip: ""} in network mk-addons-848866: {Iface:virbr1 ExpiryTime:2024-01-03 19:58:27 +0000 UTC Type:0 Mac:52:54:00:c3:68:28 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:addons-848866 Clientid:01:52:54:00:c3:68:28}
	I0103 18:58:35.572921   17285 main.go:141] libmachine: (addons-848866) DBG | domain addons-848866 has defined IP address 192.168.39.253 and MAC address 52:54:00:c3:68:28 in network mk-addons-848866
	I0103 18:58:35.573121   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHPort
	I0103 18:58:35.573365   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHKeyPath
	I0103 18:58:35.573516   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHKeyPath
	I0103 18:58:35.573652   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHUsername
	I0103 18:58:35.573820   17285 main.go:141] libmachine: Using SSH client type: native
	I0103 18:58:35.574134   17285 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.39.253 22 <nil> <nil>}
	I0103 18:58:35.574153   17285 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0103 18:58:35.874510   17285 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0103 18:58:35.874561   17285 main.go:141] libmachine: Checking connection to Docker...
	I0103 18:58:35.874589   17285 main.go:141] libmachine: (addons-848866) Calling .GetURL
	I0103 18:58:35.875861   17285 main.go:141] libmachine: (addons-848866) DBG | Using libvirt version 6000000
	I0103 18:58:35.878358   17285 main.go:141] libmachine: (addons-848866) DBG | domain addons-848866 has defined MAC address 52:54:00:c3:68:28 in network mk-addons-848866
	I0103 18:58:35.878764   17285 main.go:141] libmachine: (addons-848866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:68:28", ip: ""} in network mk-addons-848866: {Iface:virbr1 ExpiryTime:2024-01-03 19:58:27 +0000 UTC Type:0 Mac:52:54:00:c3:68:28 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:addons-848866 Clientid:01:52:54:00:c3:68:28}
	I0103 18:58:35.878794   17285 main.go:141] libmachine: (addons-848866) DBG | domain addons-848866 has defined IP address 192.168.39.253 and MAC address 52:54:00:c3:68:28 in network mk-addons-848866
	I0103 18:58:35.878973   17285 main.go:141] libmachine: Docker is up and running!
	I0103 18:58:35.878992   17285 main.go:141] libmachine: Reticulating splines...
	I0103 18:58:35.879000   17285 client.go:171] LocalClient.Create took 24.298432248s
	I0103 18:58:35.879026   17285 start.go:167] duration metric: libmachine.API.Create for "addons-848866" took 24.298497001s
	I0103 18:58:35.879046   17285 start.go:300] post-start starting for "addons-848866" (driver="kvm2")
	I0103 18:58:35.879061   17285 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0103 18:58:35.879081   17285 main.go:141] libmachine: (addons-848866) Calling .DriverName
	I0103 18:58:35.879311   17285 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0103 18:58:35.879329   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHHostname
	I0103 18:58:35.881380   17285 main.go:141] libmachine: (addons-848866) DBG | domain addons-848866 has defined MAC address 52:54:00:c3:68:28 in network mk-addons-848866
	I0103 18:58:35.881700   17285 main.go:141] libmachine: (addons-848866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:68:28", ip: ""} in network mk-addons-848866: {Iface:virbr1 ExpiryTime:2024-01-03 19:58:27 +0000 UTC Type:0 Mac:52:54:00:c3:68:28 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:addons-848866 Clientid:01:52:54:00:c3:68:28}
	I0103 18:58:35.881733   17285 main.go:141] libmachine: (addons-848866) DBG | domain addons-848866 has defined IP address 192.168.39.253 and MAC address 52:54:00:c3:68:28 in network mk-addons-848866
	I0103 18:58:35.881898   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHPort
	I0103 18:58:35.882066   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHKeyPath
	I0103 18:58:35.882240   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHUsername
	I0103 18:58:35.882397   17285 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/addons-848866/id_rsa Username:docker}
	I0103 18:58:35.964423   17285 ssh_runner.go:195] Run: cat /etc/os-release
	I0103 18:58:35.968271   17285 info.go:137] Remote host: Buildroot 2021.02.12
	I0103 18:58:35.968294   17285 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-9609/.minikube/addons for local assets ...
	I0103 18:58:35.968367   17285 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-9609/.minikube/files for local assets ...
	I0103 18:58:35.968390   17285 start.go:303] post-start completed in 89.335863ms
	I0103 18:58:35.968420   17285 main.go:141] libmachine: (addons-848866) Calling .GetConfigRaw
	I0103 18:58:35.968993   17285 main.go:141] libmachine: (addons-848866) Calling .GetIP
	I0103 18:58:35.972014   17285 main.go:141] libmachine: (addons-848866) DBG | domain addons-848866 has defined MAC address 52:54:00:c3:68:28 in network mk-addons-848866
	I0103 18:58:35.972634   17285 main.go:141] libmachine: (addons-848866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:68:28", ip: ""} in network mk-addons-848866: {Iface:virbr1 ExpiryTime:2024-01-03 19:58:27 +0000 UTC Type:0 Mac:52:54:00:c3:68:28 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:addons-848866 Clientid:01:52:54:00:c3:68:28}
	I0103 18:58:35.972673   17285 main.go:141] libmachine: (addons-848866) DBG | domain addons-848866 has defined IP address 192.168.39.253 and MAC address 52:54:00:c3:68:28 in network mk-addons-848866
	I0103 18:58:35.973088   17285 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/addons-848866/config.json ...
	I0103 18:58:35.973595   17285 start.go:128] duration metric: createHost completed in 24.411102388s
	I0103 18:58:35.973637   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHHostname
	I0103 18:58:35.976302   17285 main.go:141] libmachine: (addons-848866) DBG | domain addons-848866 has defined MAC address 52:54:00:c3:68:28 in network mk-addons-848866
	I0103 18:58:35.976659   17285 main.go:141] libmachine: (addons-848866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:68:28", ip: ""} in network mk-addons-848866: {Iface:virbr1 ExpiryTime:2024-01-03 19:58:27 +0000 UTC Type:0 Mac:52:54:00:c3:68:28 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:addons-848866 Clientid:01:52:54:00:c3:68:28}
	I0103 18:58:35.976700   17285 main.go:141] libmachine: (addons-848866) DBG | domain addons-848866 has defined IP address 192.168.39.253 and MAC address 52:54:00:c3:68:28 in network mk-addons-848866
	I0103 18:58:35.976827   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHPort
	I0103 18:58:35.977029   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHKeyPath
	I0103 18:58:35.977217   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHKeyPath
	I0103 18:58:35.977384   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHUsername
	I0103 18:58:35.977578   17285 main.go:141] libmachine: Using SSH client type: native
	I0103 18:58:35.977955   17285 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.39.253 22 <nil> <nil>}
	I0103 18:58:35.977968   17285 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0103 18:58:36.087238   17285 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704308316.058731398
	
	I0103 18:58:36.087268   17285 fix.go:206] guest clock: 1704308316.058731398
	I0103 18:58:36.087276   17285 fix.go:219] Guest: 2024-01-03 18:58:36.058731398 +0000 UTC Remote: 2024-01-03 18:58:35.973616743 +0000 UTC m=+24.527167862 (delta=85.114655ms)
	I0103 18:58:36.087295   17285 fix.go:190] guest clock delta is within tolerance: 85.114655ms
	I0103 18:58:36.087301   17285 start.go:83] releasing machines lock for "addons-848866", held for 24.524887606s
	I0103 18:58:36.087326   17285 main.go:141] libmachine: (addons-848866) Calling .DriverName
	I0103 18:58:36.087610   17285 main.go:141] libmachine: (addons-848866) Calling .GetIP
	I0103 18:58:36.090140   17285 main.go:141] libmachine: (addons-848866) DBG | domain addons-848866 has defined MAC address 52:54:00:c3:68:28 in network mk-addons-848866
	I0103 18:58:36.090603   17285 main.go:141] libmachine: (addons-848866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:68:28", ip: ""} in network mk-addons-848866: {Iface:virbr1 ExpiryTime:2024-01-03 19:58:27 +0000 UTC Type:0 Mac:52:54:00:c3:68:28 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:addons-848866 Clientid:01:52:54:00:c3:68:28}
	I0103 18:58:36.090643   17285 main.go:141] libmachine: (addons-848866) DBG | domain addons-848866 has defined IP address 192.168.39.253 and MAC address 52:54:00:c3:68:28 in network mk-addons-848866
	I0103 18:58:36.090719   17285 main.go:141] libmachine: (addons-848866) Calling .DriverName
	I0103 18:58:36.091309   17285 main.go:141] libmachine: (addons-848866) Calling .DriverName
	I0103 18:58:36.091513   17285 main.go:141] libmachine: (addons-848866) Calling .DriverName
	I0103 18:58:36.091637   17285 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0103 18:58:36.091677   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHHostname
	I0103 18:58:36.091802   17285 ssh_runner.go:195] Run: cat /version.json
	I0103 18:58:36.091832   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHHostname
	I0103 18:58:36.094487   17285 main.go:141] libmachine: (addons-848866) DBG | domain addons-848866 has defined MAC address 52:54:00:c3:68:28 in network mk-addons-848866
	I0103 18:58:36.094742   17285 main.go:141] libmachine: (addons-848866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:68:28", ip: ""} in network mk-addons-848866: {Iface:virbr1 ExpiryTime:2024-01-03 19:58:27 +0000 UTC Type:0 Mac:52:54:00:c3:68:28 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:addons-848866 Clientid:01:52:54:00:c3:68:28}
	I0103 18:58:36.094770   17285 main.go:141] libmachine: (addons-848866) DBG | domain addons-848866 has defined IP address 192.168.39.253 and MAC address 52:54:00:c3:68:28 in network mk-addons-848866
	I0103 18:58:36.094934   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHPort
	I0103 18:58:36.094958   17285 main.go:141] libmachine: (addons-848866) DBG | domain addons-848866 has defined MAC address 52:54:00:c3:68:28 in network mk-addons-848866
	I0103 18:58:36.095122   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHKeyPath
	I0103 18:58:36.095282   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHUsername
	I0103 18:58:36.095332   17285 main.go:141] libmachine: (addons-848866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:68:28", ip: ""} in network mk-addons-848866: {Iface:virbr1 ExpiryTime:2024-01-03 19:58:27 +0000 UTC Type:0 Mac:52:54:00:c3:68:28 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:addons-848866 Clientid:01:52:54:00:c3:68:28}
	I0103 18:58:36.095389   17285 main.go:141] libmachine: (addons-848866) DBG | domain addons-848866 has defined IP address 192.168.39.253 and MAC address 52:54:00:c3:68:28 in network mk-addons-848866
	I0103 18:58:36.095456   17285 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/addons-848866/id_rsa Username:docker}
	I0103 18:58:36.095521   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHPort
	I0103 18:58:36.095680   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHKeyPath
	I0103 18:58:36.095845   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHUsername
	I0103 18:58:36.095998   17285 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/addons-848866/id_rsa Username:docker}
	I0103 18:58:36.215841   17285 ssh_runner.go:195] Run: systemctl --version
	I0103 18:58:36.221449   17285 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0103 18:58:36.384601   17285 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0103 18:58:36.390030   17285 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0103 18:58:36.390096   17285 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0103 18:58:36.404963   17285 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0103 18:58:36.404993   17285 start.go:475] detecting cgroup driver to use...
	I0103 18:58:36.405065   17285 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0103 18:58:36.418624   17285 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0103 18:58:36.431849   17285 docker.go:203] disabling cri-docker service (if available) ...
	I0103 18:58:36.431939   17285 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0103 18:58:36.445089   17285 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0103 18:58:36.458470   17285 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0103 18:58:36.572083   17285 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0103 18:58:36.695005   17285 docker.go:219] disabling docker service ...
	I0103 18:58:36.695073   17285 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0103 18:58:36.707396   17285 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0103 18:58:36.719354   17285 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0103 18:58:36.825660   17285 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0103 18:58:36.930600   17285 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0103 18:58:36.942503   17285 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0103 18:58:36.959031   17285 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0103 18:58:36.959105   17285 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 18:58:36.967939   17285 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0103 18:58:36.968007   17285 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 18:58:36.977273   17285 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 18:58:36.986083   17285 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 18:58:36.995150   17285 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0103 18:58:37.004550   17285 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0103 18:58:37.012764   17285 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0103 18:58:37.012820   17285 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0103 18:58:37.025562   17285 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0103 18:58:37.033988   17285 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0103 18:58:37.131568   17285 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0103 18:58:37.303958   17285 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0103 18:58:37.304039   17285 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0103 18:58:37.309191   17285 start.go:543] Will wait 60s for crictl version
	I0103 18:58:37.309267   17285 ssh_runner.go:195] Run: which crictl
	I0103 18:58:37.312709   17285 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0103 18:58:37.346169   17285 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0103 18:58:37.346299   17285 ssh_runner.go:195] Run: crio --version
	I0103 18:58:37.389896   17285 ssh_runner.go:195] Run: crio --version
	I0103 18:58:37.436459   17285 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0103 18:58:37.438155   17285 main.go:141] libmachine: (addons-848866) Calling .GetIP
	I0103 18:58:37.440871   17285 main.go:141] libmachine: (addons-848866) DBG | domain addons-848866 has defined MAC address 52:54:00:c3:68:28 in network mk-addons-848866
	I0103 18:58:37.441312   17285 main.go:141] libmachine: (addons-848866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:68:28", ip: ""} in network mk-addons-848866: {Iface:virbr1 ExpiryTime:2024-01-03 19:58:27 +0000 UTC Type:0 Mac:52:54:00:c3:68:28 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:addons-848866 Clientid:01:52:54:00:c3:68:28}
	I0103 18:58:37.441342   17285 main.go:141] libmachine: (addons-848866) DBG | domain addons-848866 has defined IP address 192.168.39.253 and MAC address 52:54:00:c3:68:28 in network mk-addons-848866
	I0103 18:58:37.441572   17285 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0103 18:58:37.445535   17285 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0103 18:58:37.457404   17285 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0103 18:58:37.457469   17285 ssh_runner.go:195] Run: sudo crictl images --output json
	I0103 18:58:37.490409   17285 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0103 18:58:37.490494   17285 ssh_runner.go:195] Run: which lz4
	I0103 18:58:37.494266   17285 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0103 18:58:37.498248   17285 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0103 18:58:37.498285   17285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0103 18:58:39.180425   17285 crio.go:444] Took 1.686190 seconds to copy over tarball
	I0103 18:58:39.180500   17285 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0103 18:58:42.094033   17285 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.913501313s)
	I0103 18:58:42.094060   17285 crio.go:451] Took 2.913609 seconds to extract the tarball
	I0103 18:58:42.094068   17285 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0103 18:58:42.134236   17285 ssh_runner.go:195] Run: sudo crictl images --output json
	I0103 18:58:42.202088   17285 crio.go:496] all images are preloaded for cri-o runtime.
	I0103 18:58:42.202118   17285 cache_images.go:84] Images are preloaded, skipping loading
	I0103 18:58:42.202196   17285 ssh_runner.go:195] Run: crio config
	I0103 18:58:42.266273   17285 cni.go:84] Creating CNI manager for ""
	I0103 18:58:42.266297   17285 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0103 18:58:42.266318   17285 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0103 18:58:42.266348   17285 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.253 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-848866 NodeName:addons-848866 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.253"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.253 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0103 18:58:42.266544   17285 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.253
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-848866"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.253
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.253"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0103 18:58:42.266649   17285 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=addons-848866 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.253
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-848866 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0103 18:58:42.266718   17285 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0103 18:58:42.275561   17285 binaries.go:44] Found k8s binaries, skipping transfer
	I0103 18:58:42.275651   17285 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0103 18:58:42.284061   17285 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I0103 18:58:42.299495   17285 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0103 18:58:42.315969   17285 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2100 bytes)
	I0103 18:58:42.332179   17285 ssh_runner.go:195] Run: grep 192.168.39.253	control-plane.minikube.internal$ /etc/hosts
	I0103 18:58:42.335912   17285 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.253	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0103 18:58:42.348598   17285 certs.go:56] Setting up /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/addons-848866 for IP: 192.168.39.253
	I0103 18:58:42.348630   17285 certs.go:190] acquiring lock for shared ca certs: {Name:mkcbd6a6a2f3ee7625ecf4a1f72bb7f9689bd33d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 18:58:42.348805   17285 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17885-9609/.minikube/ca.key
	I0103 18:58:42.447925   17285 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt ...
	I0103 18:58:42.447954   17285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt: {Name:mk23a7bf35e88e173500de708b3e2d3be68f169d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 18:58:42.448099   17285 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17885-9609/.minikube/ca.key ...
	I0103 18:58:42.448109   17285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-9609/.minikube/ca.key: {Name:mk6ea21022c71e1d45317ddd2f3fa09888ff8c58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 18:58:42.448174   17285 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.key
	I0103 18:58:42.521938   17285 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.crt ...
	I0103 18:58:42.521971   17285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.crt: {Name:mk69f49afe00cefd73c72054e14113982cc7b60b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 18:58:42.522123   17285 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.key ...
	I0103 18:58:42.522133   17285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.key: {Name:mkb6d02af1b7e2f4bd31ae9d07649df981f98b9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 18:58:42.522218   17285 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/addons-848866/client.key
	I0103 18:58:42.522230   17285 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/addons-848866/client.crt with IP's: []
	I0103 18:58:42.617410   17285 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/addons-848866/client.crt ...
	I0103 18:58:42.617439   17285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/addons-848866/client.crt: {Name:mk925613283a0e21b9ab1eb76127fa71897c93b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 18:58:42.617592   17285 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/addons-848866/client.key ...
	I0103 18:58:42.617602   17285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/addons-848866/client.key: {Name:mkb9cc0e0be379228fbc7b82364ec5c6c23ec4bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 18:58:42.617668   17285 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/addons-848866/apiserver.key.5d9c10c0
	I0103 18:58:42.617684   17285 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/addons-848866/apiserver.crt.5d9c10c0 with IP's: [192.168.39.253 10.96.0.1 127.0.0.1 10.0.0.1]
	I0103 18:58:42.789406   17285 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/addons-848866/apiserver.crt.5d9c10c0 ...
	I0103 18:58:42.789435   17285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/addons-848866/apiserver.crt.5d9c10c0: {Name:mk5ee203ccd6974fe42c61267c59865049b96540 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 18:58:42.789573   17285 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/addons-848866/apiserver.key.5d9c10c0 ...
	I0103 18:58:42.789591   17285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/addons-848866/apiserver.key.5d9c10c0: {Name:mkf0f3bb3416f2994697336d3e77413d879c9c03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 18:58:42.789657   17285 certs.go:337] copying /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/addons-848866/apiserver.crt.5d9c10c0 -> /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/addons-848866/apiserver.crt
	I0103 18:58:42.789732   17285 certs.go:341] copying /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/addons-848866/apiserver.key.5d9c10c0 -> /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/addons-848866/apiserver.key
	I0103 18:58:42.789782   17285 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/addons-848866/proxy-client.key
	I0103 18:58:42.789799   17285 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/addons-848866/proxy-client.crt with IP's: []
	I0103 18:58:42.871552   17285 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/addons-848866/proxy-client.crt ...
	I0103 18:58:42.871592   17285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/addons-848866/proxy-client.crt: {Name:mk039e7762d63d2369098935bf6e1b352f07794a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 18:58:42.871744   17285 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/addons-848866/proxy-client.key ...
	I0103 18:58:42.871754   17285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/addons-848866/proxy-client.key: {Name:mk9a1a6f194bf1eb12d4547f2c9bf2fbe8ac5e2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 18:58:42.871904   17285 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem (1675 bytes)
	I0103 18:58:42.871940   17285 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem (1078 bytes)
	I0103 18:58:42.871964   17285 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem (1123 bytes)
	I0103 18:58:42.871988   17285 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem (1679 bytes)
	I0103 18:58:42.872528   17285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/addons-848866/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0103 18:58:42.896567   17285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/addons-848866/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0103 18:58:42.918269   17285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/addons-848866/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0103 18:58:42.939043   17285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/addons-848866/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0103 18:58:42.960037   17285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0103 18:58:42.981557   17285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0103 18:58:43.004484   17285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0103 18:58:43.025917   17285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0103 18:58:43.047752   17285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0103 18:58:43.068778   17285 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0103 18:58:43.084647   17285 ssh_runner.go:195] Run: openssl version
	I0103 18:58:43.089926   17285 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0103 18:58:43.099643   17285 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0103 18:58:43.104006   17285 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  3 18:58 /usr/share/ca-certificates/minikubeCA.pem
	I0103 18:58:43.104068   17285 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0103 18:58:43.109251   17285 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0103 18:58:43.118946   17285 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0103 18:58:43.122769   17285 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0103 18:58:43.122832   17285 kubeadm.go:404] StartCluster: {Name:addons-848866 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-848866 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.253 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 18:58:43.122904   17285 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0103 18:58:43.122974   17285 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0103 18:58:43.173705   17285 cri.go:89] found id: ""
	I0103 18:58:43.173815   17285 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0103 18:58:43.182919   17285 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0103 18:58:43.191599   17285 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0103 18:58:43.200484   17285 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0103 18:58:43.200537   17285 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0103 18:58:43.378924   17285 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0103 18:58:55.858100   17285 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0103 18:58:55.858184   17285 kubeadm.go:322] [preflight] Running pre-flight checks
	I0103 18:58:55.858261   17285 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0103 18:58:55.858363   17285 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0103 18:58:55.858461   17285 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0103 18:58:55.858539   17285 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0103 18:58:55.859997   17285 out.go:204]   - Generating certificates and keys ...
	I0103 18:58:55.860059   17285 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0103 18:58:55.860112   17285 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0103 18:58:55.860177   17285 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0103 18:58:55.860225   17285 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0103 18:58:55.860274   17285 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0103 18:58:55.860337   17285 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0103 18:58:55.860434   17285 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0103 18:58:55.860601   17285 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-848866 localhost] and IPs [192.168.39.253 127.0.0.1 ::1]
	I0103 18:58:55.860691   17285 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0103 18:58:55.860824   17285 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-848866 localhost] and IPs [192.168.39.253 127.0.0.1 ::1]
	I0103 18:58:55.860923   17285 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0103 18:58:55.861025   17285 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0103 18:58:55.861086   17285 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0103 18:58:55.861167   17285 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0103 18:58:55.861239   17285 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0103 18:58:55.861309   17285 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0103 18:58:55.861368   17285 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0103 18:58:55.861413   17285 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0103 18:58:55.861478   17285 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0103 18:58:55.861536   17285 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0103 18:58:55.863299   17285 out.go:204]   - Booting up control plane ...
	I0103 18:58:55.863422   17285 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0103 18:58:55.863523   17285 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0103 18:58:55.863651   17285 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0103 18:58:55.863804   17285 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0103 18:58:55.863942   17285 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0103 18:58:55.864002   17285 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0103 18:58:55.864221   17285 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0103 18:58:55.864327   17285 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.004003 seconds
	I0103 18:58:55.864477   17285 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0103 18:58:55.864641   17285 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0103 18:58:55.864720   17285 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0103 18:58:55.864911   17285 kubeadm.go:322] [mark-control-plane] Marking the node addons-848866 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0103 18:58:55.864964   17285 kubeadm.go:322] [bootstrap-token] Using token: 0kxb7o.6qlydf1xrx783hm5
	I0103 18:58:55.866591   17285 out.go:204]   - Configuring RBAC rules ...
	I0103 18:58:55.866720   17285 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0103 18:58:55.866829   17285 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0103 18:58:55.867006   17285 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0103 18:58:55.867203   17285 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0103 18:58:55.867353   17285 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0103 18:58:55.867464   17285 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0103 18:58:55.867584   17285 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0103 18:58:55.867638   17285 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0103 18:58:55.867696   17285 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0103 18:58:55.867711   17285 kubeadm.go:322] 
	I0103 18:58:55.867798   17285 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0103 18:58:55.867807   17285 kubeadm.go:322] 
	I0103 18:58:55.867899   17285 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0103 18:58:55.867916   17285 kubeadm.go:322] 
	I0103 18:58:55.867940   17285 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0103 18:58:55.868000   17285 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0103 18:58:55.868043   17285 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0103 18:58:55.868049   17285 kubeadm.go:322] 
	I0103 18:58:55.868095   17285 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0103 18:58:55.868101   17285 kubeadm.go:322] 
	I0103 18:58:55.868155   17285 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0103 18:58:55.868162   17285 kubeadm.go:322] 
	I0103 18:58:55.868203   17285 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0103 18:58:55.868265   17285 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0103 18:58:55.868368   17285 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0103 18:58:55.868385   17285 kubeadm.go:322] 
	I0103 18:58:55.868474   17285 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0103 18:58:55.868543   17285 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0103 18:58:55.868549   17285 kubeadm.go:322] 
	I0103 18:58:55.868627   17285 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 0kxb7o.6qlydf1xrx783hm5 \
	I0103 18:58:55.868748   17285 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:abd7748e33dd825416f0452914584982da7041f4caa98027889459d3fee91b12 \
	I0103 18:58:55.868785   17285 kubeadm.go:322] 	--control-plane 
	I0103 18:58:55.868798   17285 kubeadm.go:322] 
	I0103 18:58:55.868872   17285 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0103 18:58:55.868878   17285 kubeadm.go:322] 
	I0103 18:58:55.868976   17285 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 0kxb7o.6qlydf1xrx783hm5 \
	I0103 18:58:55.869076   17285 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:abd7748e33dd825416f0452914584982da7041f4caa98027889459d3fee91b12 
	I0103 18:58:55.869103   17285 cni.go:84] Creating CNI manager for ""
	I0103 18:58:55.869112   17285 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0103 18:58:55.870999   17285 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0103 18:58:55.872579   17285 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0103 18:58:55.933337   17285 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
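For context, the 457-byte conflist copied above is the bridge CNI configuration minikube writes for the "kvm2" + "crio" combination. The following is an illustrative Go sketch only, not minikube's actual code: it writes a minimal bridge conflist of this general shape to the same path shown in the log; the exact JSON minikube generates may differ, and the pod subnet below is an assumption.

	// Illustrative sketch: write a minimal bridge CNI conflist similar to the
	// /etc/cni/net.d/1-k8s.conflist seen in the log above. The subnet value is
	// assumed, not taken from this run.
	package main

	import (
		"log"
		"os"
	)

	const bridgeConflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "addIf": "true",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
	    },
	    {"type": "portmap", "capabilities": {"portMappings": true}}
	  ]
	}`

	func main() {
		// Matches the target directory and file name shown in the ssh_runner log lines.
		if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
			log.Fatal(err)
		}
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
			log.Fatal(err)
		}
	}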
	I0103 18:58:56.015179   17285 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0103 18:58:56.015319   17285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=1b6a81cbc05f28310ff11df4170e79e2b8bf477a minikube.k8s.io/name=addons-848866 minikube.k8s.io/updated_at=2024_01_03T18_58_56_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 18:58:56.015323   17285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 18:58:56.033717   17285 ops.go:34] apiserver oom_adj: -16
	I0103 18:58:56.214264   17285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 18:58:56.714299   17285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 18:58:57.215103   17285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 18:58:57.714563   17285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 18:58:58.214426   17285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 18:58:58.715129   17285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 18:58:59.214279   17285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 18:58:59.714937   17285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 18:59:00.214542   17285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 18:59:00.714638   17285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 18:59:01.214353   17285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 18:59:01.715060   17285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 18:59:02.214553   17285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 18:59:02.715206   17285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 18:59:03.214892   17285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 18:59:03.715001   17285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 18:59:04.214656   17285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 18:59:04.714617   17285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 18:59:05.214648   17285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 18:59:05.715121   17285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 18:59:06.215151   17285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 18:59:06.715089   17285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 18:59:07.215097   17285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 18:59:07.714647   17285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 18:59:08.214243   17285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 18:59:08.331717   17285 kubeadm.go:1088] duration metric: took 12.316452481s to wait for elevateKubeSystemPrivileges.
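The repeated "kubectl get sa default" runs above are minikube polling (roughly every 500ms) until the "default" ServiceAccount exists before it finishes elevating kube-system privileges; the 12.3s duration metric is the total wait. Below is a minimal sketch of such a poll-until-ready loop, assuming a hypothetical runKubectl helper rather than minikube's ssh_runner API.

	// Minimal sketch of the poll loop suggested by the repeated
	// "kubectl get sa default" runs above. runKubectl is a hypothetical
	// stand-in for minikube's ssh_runner-based invocation, not its real API.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func runKubectl(args ...string) error {
		return exec.Command("kubectl", args...).Run()
	}

	func waitForDefaultServiceAccount(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			// Succeeds only once the serviceaccount controller has created
			// the "default" ServiceAccount in the default namespace.
			if err := runKubectl("get", "sa", "default"); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out after %s waiting for default service account", timeout)
	}

	func main() {
		if err := waitForDefaultServiceAccount(time.Minute); err != nil {
			fmt.Println(err)
		}
	}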
	I0103 18:59:08.331762   17285 kubeadm.go:406] StartCluster complete in 25.208935458s
	I0103 18:59:08.331784   17285 settings.go:142] acquiring lock: {Name:mkd213c48538fa01cb82b417485055a8adbf5e4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 18:59:08.331949   17285 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17885-9609/kubeconfig
	I0103 18:59:08.332305   17285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-9609/kubeconfig: {Name:mkbd4e6a8b39f5a4a43fb71671a7bbd8b1617cf1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 18:59:08.332502   17285 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0103 18:59:08.332548   17285 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0103 18:59:08.332657   17285 addons.go:69] Setting ingress=true in profile "addons-848866"
	I0103 18:59:08.332675   17285 addons.go:69] Setting yakd=true in profile "addons-848866"
	I0103 18:59:08.332681   17285 addons.go:69] Setting metrics-server=true in profile "addons-848866"
	I0103 18:59:08.332696   17285 addons.go:237] Setting addon ingress=true in "addons-848866"
	I0103 18:59:08.332701   17285 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-848866"
	I0103 18:59:08.332706   17285 addons.go:237] Setting addon metrics-server=true in "addons-848866"
	I0103 18:59:08.332713   17285 addons.go:237] Setting addon nvidia-device-plugin=true in "addons-848866"
	I0103 18:59:08.332726   17285 addons.go:69] Setting storage-provisioner=true in profile "addons-848866"
	I0103 18:59:08.332738   17285 addons.go:69] Setting gcp-auth=true in profile "addons-848866"
	I0103 18:59:08.332752   17285 config.go:182] Loaded profile config "addons-848866": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 18:59:08.332756   17285 host.go:66] Checking if "addons-848866" exists ...
	I0103 18:59:08.332764   17285 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-848866"
	I0103 18:59:08.332768   17285 mustload.go:65] Loading cluster: addons-848866
	I0103 18:59:08.332775   17285 addons.go:69] Setting inspektor-gadget=true in profile "addons-848866"
	I0103 18:59:08.332776   17285 addons.go:69] Setting registry=true in profile "addons-848866"
	I0103 18:59:08.332786   17285 addons.go:237] Setting addon inspektor-gadget=true in "addons-848866"
	I0103 18:59:08.332756   17285 host.go:66] Checking if "addons-848866" exists ...
	I0103 18:59:08.332787   17285 addons.go:69] Setting volumesnapshots=true in profile "addons-848866"
	I0103 18:59:08.332798   17285 addons.go:69] Setting helm-tiller=true in profile "addons-848866"
	I0103 18:59:08.332800   17285 addons.go:237] Setting addon volumesnapshots=true in "addons-848866"
	I0103 18:59:08.332809   17285 addons.go:237] Setting addon helm-tiller=true in "addons-848866"
	I0103 18:59:08.332814   17285 host.go:66] Checking if "addons-848866" exists ...
	I0103 18:59:08.332831   17285 host.go:66] Checking if "addons-848866" exists ...
	I0103 18:59:08.332852   17285 host.go:66] Checking if "addons-848866" exists ...
	I0103 18:59:08.332930   17285 config.go:182] Loaded profile config "addons-848866": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 18:59:08.333216   17285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 18:59:08.333230   17285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 18:59:08.333231   17285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 18:59:08.333233   17285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 18:59:08.333251   17285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 18:59:08.333257   17285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 18:59:08.333269   17285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 18:59:08.332693   17285 addons.go:237] Setting addon yakd=true in "addons-848866"
	I0103 18:59:08.333301   17285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 18:59:08.332778   17285 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-848866"
	I0103 18:59:08.332755   17285 addons.go:237] Setting addon storage-provisioner=true in "addons-848866"
	I0103 18:59:08.333271   17285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 18:59:08.332789   17285 addons.go:237] Setting addon registry=true in "addons-848866"
	I0103 18:59:08.333230   17285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 18:59:08.333274   17285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 18:59:08.333355   17285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 18:59:08.332756   17285 host.go:66] Checking if "addons-848866" exists ...
	I0103 18:59:08.333376   17285 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-848866"
	I0103 18:59:08.333314   17285 addons.go:69] Setting cloud-spanner=true in profile "addons-848866"
	I0103 18:59:08.333401   17285 addons.go:237] Setting addon cloud-spanner=true in "addons-848866"
	I0103 18:59:08.332767   17285 addons.go:69] Setting ingress-dns=true in profile "addons-848866"
	I0103 18:59:08.333413   17285 addons.go:237] Setting addon csi-hostpath-driver=true in "addons-848866"
	I0103 18:59:08.333422   17285 addons.go:237] Setting addon ingress-dns=true in "addons-848866"
	I0103 18:59:08.333429   17285 addons.go:69] Setting default-storageclass=true in profile "addons-848866"
	I0103 18:59:08.333469   17285 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-848866"
	I0103 18:59:08.333541   17285 host.go:66] Checking if "addons-848866" exists ...
	I0103 18:59:08.333566   17285 host.go:66] Checking if "addons-848866" exists ...
	I0103 18:59:08.333698   17285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 18:59:08.333716   17285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 18:59:08.333733   17285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 18:59:08.333764   17285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 18:59:08.333800   17285 host.go:66] Checking if "addons-848866" exists ...
	I0103 18:59:08.333881   17285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 18:59:08.333913   17285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 18:59:08.333927   17285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 18:59:08.333948   17285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 18:59:08.333953   17285 host.go:66] Checking if "addons-848866" exists ...
	I0103 18:59:08.334120   17285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 18:59:08.334169   17285 host.go:66] Checking if "addons-848866" exists ...
	I0103 18:59:08.334210   17285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 18:59:08.334291   17285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 18:59:08.334301   17285 host.go:66] Checking if "addons-848866" exists ...
	I0103 18:59:08.334327   17285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 18:59:08.350967   17285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 18:59:08.351006   17285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 18:59:08.351070   17285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 18:59:08.351085   17285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 18:59:08.351204   17285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 18:59:08.351224   17285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 18:59:08.361234   17285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33481
	I0103 18:59:08.361315   17285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38753
	I0103 18:59:08.361403   17285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45605
	I0103 18:59:08.361464   17285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34299
	I0103 18:59:08.361490   17285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46657
	I0103 18:59:08.361785   17285 main.go:141] libmachine: () Calling .GetVersion
	I0103 18:59:08.361814   17285 main.go:141] libmachine: () Calling .GetVersion
	I0103 18:59:08.361919   17285 main.go:141] libmachine: () Calling .GetVersion
	I0103 18:59:08.362014   17285 main.go:141] libmachine: () Calling .GetVersion
	I0103 18:59:08.362016   17285 main.go:141] libmachine: () Calling .GetVersion
	I0103 18:59:08.362270   17285 main.go:141] libmachine: Using API Version  1
	I0103 18:59:08.362274   17285 main.go:141] libmachine: Using API Version  1
	I0103 18:59:08.362293   17285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 18:59:08.362307   17285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 18:59:08.362441   17285 main.go:141] libmachine: Using API Version  1
	I0103 18:59:08.362464   17285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 18:59:08.362733   17285 main.go:141] libmachine: () Calling .GetMachineName
	I0103 18:59:08.362776   17285 main.go:141] libmachine: () Calling .GetMachineName
	I0103 18:59:08.362848   17285 main.go:141] libmachine: Using API Version  1
	I0103 18:59:08.362872   17285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 18:59:08.363085   17285 main.go:141] libmachine: Using API Version  1
	I0103 18:59:08.363100   17285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 18:59:08.363125   17285 main.go:141] libmachine: () Calling .GetMachineName
	I0103 18:59:08.363267   17285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 18:59:08.363267   17285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 18:59:08.363294   17285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 18:59:08.363313   17285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 18:59:08.363357   17285 main.go:141] libmachine: (addons-848866) Calling .GetState
	I0103 18:59:08.363758   17285 main.go:141] libmachine: () Calling .GetMachineName
	I0103 18:59:08.364437   17285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 18:59:08.364489   17285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 18:59:08.364554   17285 main.go:141] libmachine: () Calling .GetMachineName
	I0103 18:59:08.366987   17285 addons.go:237] Setting addon storage-provisioner-rancher=true in "addons-848866"
	I0103 18:59:08.367028   17285 host.go:66] Checking if "addons-848866" exists ...
	I0103 18:59:08.367433   17285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 18:59:08.367466   17285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 18:59:08.371118   17285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 18:59:08.371155   17285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 18:59:08.379234   17285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42031
	I0103 18:59:08.379720   17285 main.go:141] libmachine: () Calling .GetVersion
	I0103 18:59:08.380203   17285 main.go:141] libmachine: Using API Version  1
	I0103 18:59:08.380220   17285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 18:59:08.380289   17285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37613
	I0103 18:59:08.380623   17285 main.go:141] libmachine: () Calling .GetVersion
	I0103 18:59:08.382835   17285 main.go:141] libmachine: () Calling .GetMachineName
	I0103 18:59:08.383062   17285 main.go:141] libmachine: (addons-848866) Calling .GetState
	I0103 18:59:08.384783   17285 main.go:141] libmachine: Using API Version  1
	I0103 18:59:08.384804   17285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 18:59:08.385534   17285 main.go:141] libmachine: () Calling .GetMachineName
	I0103 18:59:08.386199   17285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 18:59:08.386253   17285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 18:59:08.387208   17285 addons.go:237] Setting addon default-storageclass=true in "addons-848866"
	I0103 18:59:08.387242   17285 host.go:66] Checking if "addons-848866" exists ...
	I0103 18:59:08.387628   17285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 18:59:08.387664   17285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 18:59:08.395622   17285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32887
	I0103 18:59:08.396209   17285 main.go:141] libmachine: () Calling .GetVersion
	I0103 18:59:08.396958   17285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39817
	I0103 18:59:08.398234   17285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42219
	I0103 18:59:08.398692   17285 main.go:141] libmachine: Using API Version  1
	I0103 18:59:08.398718   17285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 18:59:08.398835   17285 main.go:141] libmachine: () Calling .GetVersion
	I0103 18:59:08.399040   17285 main.go:141] libmachine: () Calling .GetVersion
	I0103 18:59:08.399160   17285 main.go:141] libmachine: () Calling .GetMachineName
	I0103 18:59:08.399485   17285 main.go:141] libmachine: Using API Version  1
	I0103 18:59:08.399503   17285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 18:59:08.399871   17285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 18:59:08.399912   17285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 18:59:08.399928   17285 main.go:141] libmachine: () Calling .GetMachineName
	I0103 18:59:08.400427   17285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 18:59:08.400474   17285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 18:59:08.401018   17285 main.go:141] libmachine: Using API Version  1
	I0103 18:59:08.401035   17285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 18:59:08.401914   17285 main.go:141] libmachine: () Calling .GetMachineName
	I0103 18:59:08.402560   17285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 18:59:08.402595   17285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 18:59:08.409160   17285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35197
	I0103 18:59:08.409691   17285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39719
	I0103 18:59:08.409990   17285 main.go:141] libmachine: () Calling .GetVersion
	I0103 18:59:08.410380   17285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35435
	I0103 18:59:08.410796   17285 main.go:141] libmachine: Using API Version  1
	I0103 18:59:08.410815   17285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 18:59:08.410894   17285 main.go:141] libmachine: () Calling .GetVersion
	I0103 18:59:08.410926   17285 main.go:141] libmachine: () Calling .GetVersion
	I0103 18:59:08.410972   17285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40383
	I0103 18:59:08.412844   17285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44703
	I0103 18:59:08.412847   17285 main.go:141] libmachine: () Calling .GetMachineName
	I0103 18:59:08.412927   17285 main.go:141] libmachine: Using API Version  1
	I0103 18:59:08.412943   17285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 18:59:08.413202   17285 main.go:141] libmachine: () Calling .GetVersion
	I0103 18:59:08.413302   17285 main.go:141] libmachine: () Calling .GetMachineName
	I0103 18:59:08.413525   17285 main.go:141] libmachine: (addons-848866) Calling .GetState
	I0103 18:59:08.413706   17285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 18:59:08.413743   17285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 18:59:08.413770   17285 main.go:141] libmachine: Using API Version  1
	I0103 18:59:08.413786   17285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 18:59:08.414166   17285 main.go:141] libmachine: () Calling .GetMachineName
	I0103 18:59:08.414221   17285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40803
	I0103 18:59:08.414825   17285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 18:59:08.414860   17285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 18:59:08.415077   17285 main.go:141] libmachine: () Calling .GetVersion
	I0103 18:59:08.415471   17285 main.go:141] libmachine: Using API Version  1
	I0103 18:59:08.415490   17285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 18:59:08.415506   17285 main.go:141] libmachine: () Calling .GetVersion
	I0103 18:59:08.415569   17285 main.go:141] libmachine: (addons-848866) Calling .DriverName
	I0103 18:59:08.415649   17285 main.go:141] libmachine: Using API Version  1
	I0103 18:59:08.415673   17285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 18:59:08.417625   17285 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 18:59:08.415992   17285 main.go:141] libmachine: () Calling .GetMachineName
	I0103 18:59:08.416122   17285 main.go:141] libmachine: () Calling .GetMachineName
	I0103 18:59:08.416177   17285 main.go:141] libmachine: Using API Version  1
	I0103 18:59:08.416707   17285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38003
	I0103 18:59:08.419090   17285 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0103 18:59:08.419103   17285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0103 18:59:08.419121   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHHostname
	I0103 18:59:08.419194   17285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 18:59:08.419542   17285 main.go:141] libmachine: () Calling .GetMachineName
	I0103 18:59:08.420143   17285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 18:59:08.420189   17285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 18:59:08.420572   17285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 18:59:08.420617   17285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 18:59:08.420644   17285 main.go:141] libmachine: () Calling .GetVersion
	I0103 18:59:08.421637   17285 main.go:141] libmachine: Using API Version  1
	I0103 18:59:08.421656   17285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 18:59:08.421726   17285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46167
	I0103 18:59:08.422026   17285 main.go:141] libmachine: (addons-848866) Calling .GetState
	I0103 18:59:08.422284   17285 main.go:141] libmachine: () Calling .GetVersion
	I0103 18:59:08.422836   17285 main.go:141] libmachine: Using API Version  1
	I0103 18:59:08.422855   17285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 18:59:08.422871   17285 main.go:141] libmachine: (addons-848866) DBG | domain addons-848866 has defined MAC address 52:54:00:c3:68:28 in network mk-addons-848866
	I0103 18:59:08.423207   17285 main.go:141] libmachine: () Calling .GetMachineName
	I0103 18:59:08.423543   17285 main.go:141] libmachine: (addons-848866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:68:28", ip: ""} in network mk-addons-848866: {Iface:virbr1 ExpiryTime:2024-01-03 19:58:27 +0000 UTC Type:0 Mac:52:54:00:c3:68:28 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:addons-848866 Clientid:01:52:54:00:c3:68:28}
	I0103 18:59:08.423548   17285 main.go:141] libmachine: (addons-848866) Calling .GetState
	I0103 18:59:08.423554   17285 main.go:141] libmachine: () Calling .GetMachineName
	I0103 18:59:08.423564   17285 main.go:141] libmachine: (addons-848866) DBG | domain addons-848866 has defined IP address 192.168.39.253 and MAC address 52:54:00:c3:68:28 in network mk-addons-848866
	I0103 18:59:08.423706   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHPort
	I0103 18:59:08.424125   17285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 18:59:08.424159   17285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 18:59:08.424881   17285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46131
	I0103 18:59:08.425027   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHKeyPath
	I0103 18:59:08.425193   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHUsername
	I0103 18:59:08.425264   17285 main.go:141] libmachine: () Calling .GetVersion
	I0103 18:59:08.425563   17285 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/addons-848866/id_rsa Username:docker}
	I0103 18:59:08.426030   17285 main.go:141] libmachine: Using API Version  1
	I0103 18:59:08.426044   17285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 18:59:08.426557   17285 host.go:66] Checking if "addons-848866" exists ...
	I0103 18:59:08.426630   17285 main.go:141] libmachine: (addons-848866) Calling .DriverName
	I0103 18:59:08.429117   17285 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.13
	I0103 18:59:08.426907   17285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 18:59:08.427269   17285 main.go:141] libmachine: () Calling .GetMachineName
	I0103 18:59:08.430715   17285 addons.go:429] installing /etc/kubernetes/addons/deployment.yaml
	I0103 18:59:08.430726   17285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0103 18:59:08.430743   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHHostname
	I0103 18:59:08.430802   17285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 18:59:08.431168   17285 main.go:141] libmachine: (addons-848866) Calling .GetState
	I0103 18:59:08.433580   17285 main.go:141] libmachine: (addons-848866) DBG | domain addons-848866 has defined MAC address 52:54:00:c3:68:28 in network mk-addons-848866
	I0103 18:59:08.433948   17285 main.go:141] libmachine: (addons-848866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:68:28", ip: ""} in network mk-addons-848866: {Iface:virbr1 ExpiryTime:2024-01-03 19:58:27 +0000 UTC Type:0 Mac:52:54:00:c3:68:28 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:addons-848866 Clientid:01:52:54:00:c3:68:28}
	I0103 18:59:08.433978   17285 main.go:141] libmachine: (addons-848866) DBG | domain addons-848866 has defined IP address 192.168.39.253 and MAC address 52:54:00:c3:68:28 in network mk-addons-848866
	I0103 18:59:08.434107   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHPort
	I0103 18:59:08.434341   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHKeyPath
	I0103 18:59:08.434508   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHUsername
	I0103 18:59:08.434678   17285 main.go:141] libmachine: (addons-848866) Calling .DriverName
	I0103 18:59:08.434831   17285 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/addons-848866/id_rsa Username:docker}
	I0103 18:59:08.437922   17285 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0103 18:59:08.437091   17285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38169
	I0103 18:59:08.440089   17285 addons.go:429] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0103 18:59:08.440112   17285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0103 18:59:08.440131   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHHostname
	I0103 18:59:08.440768   17285 main.go:141] libmachine: () Calling .GetVersion
	I0103 18:59:08.441324   17285 main.go:141] libmachine: Using API Version  1
	I0103 18:59:08.441348   17285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 18:59:08.441731   17285 main.go:141] libmachine: () Calling .GetMachineName
	I0103 18:59:08.442320   17285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 18:59:08.442360   17285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 18:59:08.444034   17285 main.go:141] libmachine: (addons-848866) DBG | domain addons-848866 has defined MAC address 52:54:00:c3:68:28 in network mk-addons-848866
	I0103 18:59:08.444552   17285 main.go:141] libmachine: (addons-848866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:68:28", ip: ""} in network mk-addons-848866: {Iface:virbr1 ExpiryTime:2024-01-03 19:58:27 +0000 UTC Type:0 Mac:52:54:00:c3:68:28 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:addons-848866 Clientid:01:52:54:00:c3:68:28}
	I0103 18:59:08.444581   17285 main.go:141] libmachine: (addons-848866) DBG | domain addons-848866 has defined IP address 192.168.39.253 and MAC address 52:54:00:c3:68:28 in network mk-addons-848866
	I0103 18:59:08.444754   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHPort
	I0103 18:59:08.444961   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHKeyPath
	I0103 18:59:08.445034   17285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44111
	I0103 18:59:08.445339   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHUsername
	I0103 18:59:08.445518   17285 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/addons-848866/id_rsa Username:docker}
	I0103 18:59:08.450934   17285 main.go:141] libmachine: () Calling .GetVersion
	I0103 18:59:08.451593   17285 main.go:141] libmachine: Using API Version  1
	I0103 18:59:08.451628   17285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 18:59:08.452651   17285 main.go:141] libmachine: () Calling .GetMachineName
	I0103 18:59:08.452853   17285 main.go:141] libmachine: (addons-848866) Calling .GetState
	I0103 18:59:08.454679   17285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40829
	I0103 18:59:08.455127   17285 main.go:141] libmachine: () Calling .GetVersion
	I0103 18:59:08.455591   17285 main.go:141] libmachine: Using API Version  1
	I0103 18:59:08.455606   17285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 18:59:08.455937   17285 main.go:141] libmachine: () Calling .GetMachineName
	I0103 18:59:08.456092   17285 main.go:141] libmachine: (addons-848866) Calling .GetState
	I0103 18:59:08.457185   17285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44935
	I0103 18:59:08.457526   17285 main.go:141] libmachine: () Calling .GetVersion
	I0103 18:59:08.458032   17285 main.go:141] libmachine: Using API Version  1
	I0103 18:59:08.458052   17285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 18:59:08.458122   17285 main.go:141] libmachine: (addons-848866) Calling .DriverName
	I0103 18:59:08.461729   17285 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0103 18:59:08.458510   17285 main.go:141] libmachine: () Calling .GetMachineName
	I0103 18:59:08.459347   17285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43881
	I0103 18:59:08.461140   17285 main.go:141] libmachine: (addons-848866) Calling .DriverName
	I0103 18:59:08.463218   17285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46483
	I0103 18:59:08.463438   17285 addons.go:429] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0103 18:59:08.463447   17285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0103 18:59:08.463463   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHHostname
	I0103 18:59:08.464056   17285 main.go:141] libmachine: () Calling .GetVersion
	I0103 18:59:08.464096   17285 main.go:141] libmachine: () Calling .GetVersion
	I0103 18:59:08.464165   17285 main.go:141] libmachine: (addons-848866) Calling .GetState
	I0103 18:59:08.465975   17285 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0103 18:59:08.464745   17285 main.go:141] libmachine: Using API Version  1
	I0103 18:59:08.465658   17285 main.go:141] libmachine: Using API Version  1
	I0103 18:59:08.466015   17285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 18:59:08.466548   17285 main.go:141] libmachine: (addons-848866) Calling .DriverName
	I0103 18:59:08.468859   17285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34459
	I0103 18:59:08.468886   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHPort
	I0103 18:59:08.468905   17285 main.go:141] libmachine: (addons-848866) DBG | domain addons-848866 has defined MAC address 52:54:00:c3:68:28 in network mk-addons-848866
	I0103 18:59:08.469171   17285 main.go:141] libmachine: (addons-848866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:68:28", ip: ""} in network mk-addons-848866: {Iface:virbr1 ExpiryTime:2024-01-03 19:58:27 +0000 UTC Type:0 Mac:52:54:00:c3:68:28 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:addons-848866 Clientid:01:52:54:00:c3:68:28}
	I0103 18:59:08.469194   17285 main.go:141] libmachine: (addons-848866) DBG | domain addons-848866 has defined IP address 192.168.39.253 and MAC address 52:54:00:c3:68:28 in network mk-addons-848866
	I0103 18:59:08.469203   17285 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.5
	I0103 18:59:08.469281   17285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 18:59:08.469647   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHKeyPath
	I0103 18:59:08.470341   17285 main.go:141] libmachine: () Calling .GetVersion
	I0103 18:59:08.470386   17285 main.go:141] libmachine: () Calling .GetMachineName
	I0103 18:59:08.471540   17285 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0103 18:59:08.471742   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHUsername
	I0103 18:59:08.471791   17285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39795
	I0103 18:59:08.471922   17285 main.go:141] libmachine: () Calling .GetMachineName
	I0103 18:59:08.473119   17285 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I0103 18:59:08.474879   17285 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0103 18:59:08.474901   17285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0103 18:59:08.474919   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHHostname
	I0103 18:59:08.476903   17285 addons.go:429] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0103 18:59:08.476924   17285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I0103 18:59:08.476941   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHHostname
	I0103 18:59:08.473249   17285 main.go:141] libmachine: (addons-848866) Calling .GetState
	I0103 18:59:08.473928   17285 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/addons-848866/id_rsa Username:docker}
	I0103 18:59:08.473948   17285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46735
	I0103 18:59:08.473973   17285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33889
	I0103 18:59:08.473973   17285 main.go:141] libmachine: (addons-848866) Calling .GetState
	I0103 18:59:08.474058   17285 main.go:141] libmachine: Using API Version  1
	I0103 18:59:08.477681   17285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 18:59:08.474178   17285 main.go:141] libmachine: () Calling .GetVersion
	I0103 18:59:08.478736   17285 main.go:141] libmachine: () Calling .GetMachineName
	I0103 18:59:08.478806   17285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36241
	I0103 18:59:08.478906   17285 main.go:141] libmachine: Using API Version  1
	I0103 18:59:08.479941   17285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 18:59:08.479452   17285 main.go:141] libmachine: (addons-848866) Calling .GetState
	I0103 18:59:08.479998   17285 main.go:141] libmachine: () Calling .GetVersion
	I0103 18:59:08.479547   17285 main.go:141] libmachine: () Calling .GetVersion
	I0103 18:59:08.480232   17285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41611
	I0103 18:59:08.480848   17285 main.go:141] libmachine: () Calling .GetMachineName
	I0103 18:59:08.480895   17285 main.go:141] libmachine: Using API Version  1
	I0103 18:59:08.480908   17285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 18:59:08.481038   17285 main.go:141] libmachine: () Calling .GetVersion
	I0103 18:59:08.481123   17285 main.go:141] libmachine: () Calling .GetVersion
	I0103 18:59:08.481251   17285 main.go:141] libmachine: Using API Version  1
	I0103 18:59:08.481262   17285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 18:59:08.481307   17285 main.go:141] libmachine: (addons-848866) Calling .GetState
	I0103 18:59:08.481520   17285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35975
	I0103 18:59:08.481668   17285 main.go:141] libmachine: () Calling .GetMachineName
	I0103 18:59:08.481682   17285 main.go:141] libmachine: () Calling .GetMachineName
	I0103 18:59:08.481759   17285 main.go:141] libmachine: Using API Version  1
	I0103 18:59:08.481847   17285 main.go:141] libmachine: Using API Version  1
	I0103 18:59:08.481870   17285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 18:59:08.482168   17285 main.go:141] libmachine: (addons-848866) Calling .DriverName
	I0103 18:59:08.481974   17285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 18:59:08.482172   17285 main.go:141] libmachine: (addons-848866) Calling .GetState
	I0103 18:59:08.482468   17285 main.go:141] libmachine: () Calling .GetMachineName
	I0103 18:59:08.482488   17285 main.go:141] libmachine: (addons-848866) DBG | domain addons-848866 has defined MAC address 52:54:00:c3:68:28 in network mk-addons-848866
	I0103 18:59:08.482531   17285 main.go:141] libmachine: (addons-848866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:68:28", ip: ""} in network mk-addons-848866: {Iface:virbr1 ExpiryTime:2024-01-03 19:58:27 +0000 UTC Type:0 Mac:52:54:00:c3:68:28 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:addons-848866 Clientid:01:52:54:00:c3:68:28}
	I0103 18:59:08.482553   17285 main.go:141] libmachine: (addons-848866) DBG | domain addons-848866 has defined IP address 192.168.39.253 and MAC address 52:54:00:c3:68:28 in network mk-addons-848866
	I0103 18:59:08.482390   17285 main.go:141] libmachine: () Calling .GetMachineName
	I0103 18:59:08.482677   17285 main.go:141] libmachine: (addons-848866) DBG | domain addons-848866 has defined MAC address 52:54:00:c3:68:28 in network mk-addons-848866
	I0103 18:59:08.482773   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHPort
	I0103 18:59:08.482828   17285 main.go:141] libmachine: (addons-848866) Calling .GetState
	I0103 18:59:08.482993   17285 main.go:141] libmachine: (addons-848866) Calling .GetState
	I0103 18:59:08.483017   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHKeyPath
	I0103 18:59:08.483199   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHUsername
	I0103 18:59:08.483344   17285 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/addons-848866/id_rsa Username:docker}
	I0103 18:59:08.484654   17285 main.go:141] libmachine: (addons-848866) Calling .DriverName
	I0103 18:59:08.486632   17285 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.24.0
	I0103 18:59:08.485177   17285 main.go:141] libmachine: (addons-848866) Calling .DriverName
	I0103 18:59:08.485208   17285 main.go:141] libmachine: (addons-848866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:68:28", ip: ""} in network mk-addons-848866: {Iface:virbr1 ExpiryTime:2024-01-03 19:58:27 +0000 UTC Type:0 Mac:52:54:00:c3:68:28 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:addons-848866 Clientid:01:52:54:00:c3:68:28}
	I0103 18:59:08.485562   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHPort
	I0103 18:59:08.485761   17285 main.go:141] libmachine: (addons-848866) Calling .DriverName
	I0103 18:59:08.485934   17285 main.go:141] libmachine: (addons-848866) Calling .DriverName
	I0103 18:59:08.486051   17285 main.go:141] libmachine: (addons-848866) Calling .DriverName
	I0103 18:59:08.486116   17285 main.go:141] libmachine: () Calling .GetVersion
	I0103 18:59:08.486324   17285 main.go:141] libmachine: (addons-848866) Calling .DriverName
	I0103 18:59:08.486363   17285 main.go:141] libmachine: (addons-848866) Calling .DriverName
	I0103 18:59:08.487977   17285 addons.go:429] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0103 18:59:08.487994   17285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0103 18:59:08.488010   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHHostname
	I0103 18:59:08.487979   17285 main.go:141] libmachine: (addons-848866) DBG | domain addons-848866 has defined IP address 192.168.39.253 and MAC address 52:54:00:c3:68:28 in network mk-addons-848866
	I0103 18:59:08.489467   17285 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0103 18:59:08.491031   17285 out.go:177]   - Using image docker.io/busybox:stable
	I0103 18:59:08.488256   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHKeyPath
	I0103 18:59:08.488800   17285 main.go:141] libmachine: Using API Version  1
	I0103 18:59:08.490989   17285 main.go:141] libmachine: (addons-848866) DBG | domain addons-848866 has defined MAC address 52:54:00:c3:68:28 in network mk-addons-848866
	I0103 18:59:08.491722   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHPort
	I0103 18:59:08.492684   17285 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0103 18:59:08.492857   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHUsername
	I0103 18:59:08.494016   17285 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0103 18:59:08.494024   17285 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0103 18:59:08.494048   17285 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I0103 18:59:08.494073   17285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 18:59:08.495453   17285 main.go:141] libmachine: (addons-848866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:68:28", ip: ""} in network mk-addons-848866: {Iface:virbr1 ExpiryTime:2024-01-03 19:58:27 +0000 UTC Type:0 Mac:52:54:00:c3:68:28 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:addons-848866 Clientid:01:52:54:00:c3:68:28}
	I0103 18:59:08.495668   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHKeyPath
	I0103 18:59:08.496530   17285 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0103 18:59:08.496595   17285 main.go:141] libmachine: (addons-848866) DBG | domain addons-848866 has defined IP address 192.168.39.253 and MAC address 52:54:00:c3:68:28 in network mk-addons-848866
	I0103 18:59:08.496742   17285 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/addons-848866/id_rsa Username:docker}
	I0103 18:59:08.497063   17285 main.go:141] libmachine: () Calling .GetMachineName
	I0103 18:59:08.497777   17285 out.go:177]   - Using image docker.io/registry:2.8.3
	I0103 18:59:08.497971   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHUsername
	I0103 18:59:08.498871   17285 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0103 18:59:08.498938   17285 addons.go:429] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0103 18:59:08.498941   17285 addons.go:429] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0103 18:59:08.499516   17285 main.go:141] libmachine: (addons-848866) Calling .GetState
	I0103 18:59:08.500412   17285 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0103 18:59:08.500496   17285 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/addons-848866/id_rsa Username:docker}
	I0103 18:59:08.501727   17285 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0103 18:59:08.501744   17285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0103 18:59:08.501747   17285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0103 18:59:08.501758   17285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0103 18:59:08.504562   17285 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0103 18:59:08.503122   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHHostname
	I0103 18:59:08.503135   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHHostname
	I0103 18:59:08.503144   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHHostname
	I0103 18:59:08.503155   17285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0103 18:59:08.503182   17285 addons.go:429] installing /etc/kubernetes/addons/registry-rc.yaml
	I0103 18:59:08.504403   17285 main.go:141] libmachine: (addons-848866) Calling .DriverName
	I0103 18:59:08.507875   17285 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0103 18:59:08.505744   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHHostname
	I0103 18:59:08.505754   17285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0103 18:59:08.507963   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHHostname
	I0103 18:59:08.505981   17285 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0103 18:59:08.508044   17285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0103 18:59:08.509472   17285 main.go:141] libmachine: (addons-848866) DBG | domain addons-848866 has defined MAC address 52:54:00:c3:68:28 in network mk-addons-848866
	I0103 18:59:08.509859   17285 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0103 18:59:08.509877   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHHostname
	I0103 18:59:08.510128   17285 main.go:141] libmachine: (addons-848866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:68:28", ip: ""} in network mk-addons-848866: {Iface:virbr1 ExpiryTime:2024-01-03 19:58:27 +0000 UTC Type:0 Mac:52:54:00:c3:68:28 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:addons-848866 Clientid:01:52:54:00:c3:68:28}
	I0103 18:59:08.510166   17285 main.go:141] libmachine: (addons-848866) DBG | domain addons-848866 has defined MAC address 52:54:00:c3:68:28 in network mk-addons-848866
	I0103 18:59:08.510574   17285 main.go:141] libmachine: (addons-848866) DBG | domain addons-848866 has defined MAC address 52:54:00:c3:68:28 in network mk-addons-848866
	I0103 18:59:08.510726   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHPort
	I0103 18:59:08.511088   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHPort
	I0103 18:59:08.511832   17285 main.go:141] libmachine: (addons-848866) DBG | domain addons-848866 has defined IP address 192.168.39.253 and MAC address 52:54:00:c3:68:28 in network mk-addons-848866
	I0103 18:59:08.511847   17285 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0103 18:59:08.511265   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHPort
	I0103 18:59:08.511878   17285 main.go:141] libmachine: (addons-848866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:68:28", ip: ""} in network mk-addons-848866: {Iface:virbr1 ExpiryTime:2024-01-03 19:58:27 +0000 UTC Type:0 Mac:52:54:00:c3:68:28 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:addons-848866 Clientid:01:52:54:00:c3:68:28}
	I0103 18:59:08.511902   17285 main.go:141] libmachine: (addons-848866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:68:28", ip: ""} in network mk-addons-848866: {Iface:virbr1 ExpiryTime:2024-01-03 19:58:27 +0000 UTC Type:0 Mac:52:54:00:c3:68:28 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:addons-848866 Clientid:01:52:54:00:c3:68:28}
	I0103 18:59:08.512061   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHKeyPath
	I0103 18:59:08.512067   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHKeyPath
	I0103 18:59:08.512646   17285 main.go:141] libmachine: (addons-848866) DBG | domain addons-848866 has defined MAC address 52:54:00:c3:68:28 in network mk-addons-848866
	I0103 18:59:08.513171   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHPort
	I0103 18:59:08.513794   17285 main.go:141] libmachine: (addons-848866) DBG | domain addons-848866 has defined IP address 192.168.39.253 and MAC address 52:54:00:c3:68:28 in network mk-addons-848866
	I0103 18:59:08.513213   17285 main.go:141] libmachine: (addons-848866) DBG | domain addons-848866 has defined MAC address 52:54:00:c3:68:28 in network mk-addons-848866
	I0103 18:59:08.513844   17285 main.go:141] libmachine: (addons-848866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:68:28", ip: ""} in network mk-addons-848866: {Iface:virbr1 ExpiryTime:2024-01-03 19:58:27 +0000 UTC Type:0 Mac:52:54:00:c3:68:28 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:addons-848866 Clientid:01:52:54:00:c3:68:28}
	I0103 18:59:08.513738   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHPort
	I0103 18:59:08.513865   17285 main.go:141] libmachine: (addons-848866) DBG | domain addons-848866 has defined IP address 192.168.39.253 and MAC address 52:54:00:c3:68:28 in network mk-addons-848866
	I0103 18:59:08.513871   17285 main.go:141] libmachine: (addons-848866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:68:28", ip: ""} in network mk-addons-848866: {Iface:virbr1 ExpiryTime:2024-01-03 19:58:27 +0000 UTC Type:0 Mac:52:54:00:c3:68:28 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:addons-848866 Clientid:01:52:54:00:c3:68:28}
	I0103 18:59:08.513772   17285 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0103 18:59:08.515472   17285 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0103 18:59:08.517169   17285 addons.go:429] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0103 18:59:08.517183   17285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0103 18:59:08.517197   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHHostname
	I0103 18:59:08.515425   17285 main.go:141] libmachine: (addons-848866) DBG | domain addons-848866 has defined MAC address 52:54:00:c3:68:28 in network mk-addons-848866
	I0103 18:59:08.513913   17285 main.go:141] libmachine: (addons-848866) DBG | domain addons-848866 has defined IP address 192.168.39.253 and MAC address 52:54:00:c3:68:28 in network mk-addons-848866
	I0103 18:59:08.514033   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHUsername
	I0103 18:59:08.517254   17285 main.go:141] libmachine: (addons-848866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:68:28", ip: ""} in network mk-addons-848866: {Iface:virbr1 ExpiryTime:2024-01-03 19:58:27 +0000 UTC Type:0 Mac:52:54:00:c3:68:28 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:addons-848866 Clientid:01:52:54:00:c3:68:28}
	I0103 18:59:08.514055   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHKeyPath
	I0103 18:59:08.517280   17285 main.go:141] libmachine: (addons-848866) DBG | domain addons-848866 has defined IP address 192.168.39.253 and MAC address 52:54:00:c3:68:28 in network mk-addons-848866
	I0103 18:59:08.514084   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHKeyPath
	I0103 18:59:08.514092   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHKeyPath
	I0103 18:59:08.514304   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHUsername
	I0103 18:59:08.513900   17285 main.go:141] libmachine: (addons-848866) DBG | domain addons-848866 has defined IP address 192.168.39.253 and MAC address 52:54:00:c3:68:28 in network mk-addons-848866
	I0103 18:59:08.515954   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHPort
	I0103 18:59:08.517442   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHUsername
	I0103 18:59:08.517496   17285 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/addons-848866/id_rsa Username:docker}
	I0103 18:59:08.517519   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHKeyPath
	I0103 18:59:08.517561   17285 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/addons-848866/id_rsa Username:docker}
	I0103 18:59:08.517600   17285 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/addons-848866/id_rsa Username:docker}
	I0103 18:59:08.518162   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHUsername
	I0103 18:59:08.518171   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHUsername
	I0103 18:59:08.518230   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHUsername
	I0103 18:59:08.518317   17285 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/addons-848866/id_rsa Username:docker}
	I0103 18:59:08.518350   17285 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/addons-848866/id_rsa Username:docker}
	I0103 18:59:08.518350   17285 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/addons-848866/id_rsa Username:docker}
	I0103 18:59:08.520381   17285 main.go:141] libmachine: (addons-848866) DBG | domain addons-848866 has defined MAC address 52:54:00:c3:68:28 in network mk-addons-848866
	I0103 18:59:08.520765   17285 main.go:141] libmachine: (addons-848866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:68:28", ip: ""} in network mk-addons-848866: {Iface:virbr1 ExpiryTime:2024-01-03 19:58:27 +0000 UTC Type:0 Mac:52:54:00:c3:68:28 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:addons-848866 Clientid:01:52:54:00:c3:68:28}
	I0103 18:59:08.520784   17285 main.go:141] libmachine: (addons-848866) DBG | domain addons-848866 has defined IP address 192.168.39.253 and MAC address 52:54:00:c3:68:28 in network mk-addons-848866
	I0103 18:59:08.520927   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHPort
	I0103 18:59:08.521064   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHKeyPath
	W0103 18:59:08.521199   17285 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:48402->192.168.39.253:22: read: connection reset by peer
	I0103 18:59:08.521222   17285 retry.go:31] will retry after 125.120753ms: ssh: handshake failed: read tcp 192.168.39.1:48402->192.168.39.253:22: read: connection reset by peer
	I0103 18:59:08.521267   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHUsername
	I0103 18:59:08.521427   17285 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/addons-848866/id_rsa Username:docker}
	I0103 18:59:08.791107   17285 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0103 18:59:08.877695   17285 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-848866" context rescaled to 1 replicas
	I0103 18:59:08.877739   17285 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.253 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0103 18:59:08.880795   17285 out.go:177] * Verifying Kubernetes components...
	I0103 18:59:08.882280   17285 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 18:59:09.072527   17285 addons.go:429] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0103 18:59:09.072559   17285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0103 18:59:09.077653   17285 addons.go:429] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0103 18:59:09.077682   17285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0103 18:59:09.084204   17285 addons.go:429] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0103 18:59:09.084231   17285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0103 18:59:09.084262   17285 addons.go:429] installing /etc/kubernetes/addons/registry-svc.yaml
	I0103 18:59:09.084283   17285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0103 18:59:09.091548   17285 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0103 18:59:09.091574   17285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0103 18:59:09.113210   17285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0103 18:59:09.116731   17285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0103 18:59:09.118419   17285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0103 18:59:09.119308   17285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0103 18:59:09.120633   17285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0103 18:59:09.121729   17285 addons.go:429] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0103 18:59:09.121746   17285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0103 18:59:09.122134   17285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0103 18:59:09.122313   17285 addons.go:429] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0103 18:59:09.122333   17285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0103 18:59:09.123758   17285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0103 18:59:09.144043   17285 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0103 18:59:09.144077   17285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0103 18:59:09.325423   17285 addons.go:429] installing /etc/kubernetes/addons/ig-role.yaml
	I0103 18:59:09.325448   17285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0103 18:59:09.326155   17285 addons.go:429] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0103 18:59:09.326176   17285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0103 18:59:09.344898   17285 addons.go:429] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0103 18:59:09.344929   17285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0103 18:59:09.373713   17285 addons.go:429] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0103 18:59:09.373739   17285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0103 18:59:09.374408   17285 addons.go:429] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0103 18:59:09.374427   17285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0103 18:59:09.381343   17285 addons.go:429] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0103 18:59:09.381368   17285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0103 18:59:09.392057   17285 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0103 18:59:09.392079   17285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0103 18:59:09.447000   17285 addons.go:429] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0103 18:59:09.447025   17285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0103 18:59:09.460578   17285 addons.go:429] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0103 18:59:09.460611   17285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0103 18:59:09.527602   17285 addons.go:429] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0103 18:59:09.527627   17285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0103 18:59:09.570232   17285 addons.go:429] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0103 18:59:09.570262   17285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0103 18:59:09.594588   17285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0103 18:59:09.627877   17285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0103 18:59:09.630767   17285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0103 18:59:09.641708   17285 addons.go:429] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0103 18:59:09.641738   17285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0103 18:59:09.665926   17285 addons.go:429] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0103 18:59:09.665958   17285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0103 18:59:09.727612   17285 addons.go:429] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0103 18:59:09.727643   17285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0103 18:59:09.732716   17285 addons.go:429] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0103 18:59:09.732740   17285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0103 18:59:09.779933   17285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0103 18:59:09.792282   17285 addons.go:429] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0103 18:59:09.792307   17285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0103 18:59:09.834595   17285 addons.go:429] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0103 18:59:09.834639   17285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0103 18:59:09.848522   17285 addons.go:429] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0103 18:59:09.848545   17285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0103 18:59:09.897117   17285 addons.go:429] installing /etc/kubernetes/addons/ig-crd.yaml
	I0103 18:59:09.897140   17285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0103 18:59:09.916987   17285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0103 18:59:09.927592   17285 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0103 18:59:09.927622   17285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0103 18:59:09.961917   17285 addons.go:429] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0103 18:59:09.961947   17285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0103 18:59:09.987880   17285 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0103 18:59:09.987901   17285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0103 18:59:10.052190   17285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0103 18:59:10.067678   17285 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0103 18:59:10.067700   17285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0103 18:59:10.130941   17285 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0103 18:59:10.130963   17285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0103 18:59:10.167227   17285 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0103 18:59:10.167251   17285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0103 18:59:10.204581   17285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0103 18:59:12.307629   17285 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.516482615s)
	I0103 18:59:12.307665   17285 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (3.425359301s)
	I0103 18:59:12.307675   17285 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
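The sed pipeline completed above rewrites the CoreDNS Corefile so that host.minikube.internal resolves to the host-side bridge IP, 192.168.39.1. A minimal sketch of how the injected record could be checked, assuming kubectl on the host is pointed at the addons-848866 context (this check is not part of the test run):

	# Print the Corefile and show the hosts block injected by the sed expression above.
	kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A3 'hosts {'
	# Expected fragment, based on that sed expression:
	#         hosts {
	#            192.168.39.1 host.minikube.internal
	#            fallthrough
	#         }
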
	I0103 18:59:12.335396   17285 node_ready.go:35] waiting up to 6m0s for node "addons-848866" to be "Ready" ...
	I0103 18:59:12.807725   17285 node_ready.go:49] node "addons-848866" has status "Ready":"True"
	I0103 18:59:12.807756   17285 node_ready.go:38] duration metric: took 472.326996ms waiting for node "addons-848866" to be "Ready" ...
	I0103 18:59:12.807769   17285 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 18:59:13.193249   17285 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-65cqq" in "kube-system" namespace to be "Ready" ...
	I0103 18:59:13.440978   17285 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.32772333s)
	I0103 18:59:13.441040   17285 main.go:141] libmachine: Making call to close driver server
	I0103 18:59:13.441057   17285 main.go:141] libmachine: (addons-848866) Calling .Close
	I0103 18:59:13.441337   17285 main.go:141] libmachine: Successfully made call to close driver server
	I0103 18:59:13.441364   17285 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 18:59:13.441381   17285 main.go:141] libmachine: Making call to close driver server
	I0103 18:59:13.441389   17285 main.go:141] libmachine: (addons-848866) Calling .Close
	I0103 18:59:13.441631   17285 main.go:141] libmachine: Successfully made call to close driver server
	I0103 18:59:13.441645   17285 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 18:59:14.777728   17285 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.66096644s)
	I0103 18:59:14.777769   17285 main.go:141] libmachine: Making call to close driver server
	I0103 18:59:14.777783   17285 main.go:141] libmachine: (addons-848866) Calling .Close
	I0103 18:59:14.778023   17285 main.go:141] libmachine: Successfully made call to close driver server
	I0103 18:59:14.778041   17285 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 18:59:14.778114   17285 main.go:141] libmachine: Making call to close driver server
	I0103 18:59:14.778162   17285 main.go:141] libmachine: (addons-848866) Calling .Close
	I0103 18:59:14.778111   17285 main.go:141] libmachine: (addons-848866) DBG | Closing plugin on server side
	I0103 18:59:14.778463   17285 main.go:141] libmachine: Successfully made call to close driver server
	I0103 18:59:14.778507   17285 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 18:59:15.496393   17285 pod_ready.go:102] pod "coredns-5dd5756b68-65cqq" in "kube-system" namespace has status "Ready":"False"
	I0103 18:59:15.934427   17285 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0103 18:59:15.934464   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHHostname
	I0103 18:59:15.937290   17285 main.go:141] libmachine: (addons-848866) DBG | domain addons-848866 has defined MAC address 52:54:00:c3:68:28 in network mk-addons-848866
	I0103 18:59:15.937730   17285 main.go:141] libmachine: (addons-848866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:68:28", ip: ""} in network mk-addons-848866: {Iface:virbr1 ExpiryTime:2024-01-03 19:58:27 +0000 UTC Type:0 Mac:52:54:00:c3:68:28 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:addons-848866 Clientid:01:52:54:00:c3:68:28}
	I0103 18:59:15.937755   17285 main.go:141] libmachine: (addons-848866) DBG | domain addons-848866 has defined IP address 192.168.39.253 and MAC address 52:54:00:c3:68:28 in network mk-addons-848866
	I0103 18:59:15.937931   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHPort
	I0103 18:59:15.938131   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHKeyPath
	I0103 18:59:15.938273   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHUsername
	I0103 18:59:15.938404   17285 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/addons-848866/id_rsa Username:docker}
	I0103 18:59:16.096608   17285 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0103 18:59:16.126153   17285 addons.go:237] Setting addon gcp-auth=true in "addons-848866"
	I0103 18:59:16.126208   17285 host.go:66] Checking if "addons-848866" exists ...
	I0103 18:59:16.126545   17285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 18:59:16.126578   17285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 18:59:16.142286   17285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43097
	I0103 18:59:16.142742   17285 main.go:141] libmachine: () Calling .GetVersion
	I0103 18:59:16.143230   17285 main.go:141] libmachine: Using API Version  1
	I0103 18:59:16.143254   17285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 18:59:16.143570   17285 main.go:141] libmachine: () Calling .GetMachineName
	I0103 18:59:16.144008   17285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 18:59:16.144033   17285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 18:59:16.159472   17285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45383
	I0103 18:59:16.159910   17285 main.go:141] libmachine: () Calling .GetVersion
	I0103 18:59:16.160371   17285 main.go:141] libmachine: Using API Version  1
	I0103 18:59:16.160399   17285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 18:59:16.160748   17285 main.go:141] libmachine: () Calling .GetMachineName
	I0103 18:59:16.160960   17285 main.go:141] libmachine: (addons-848866) Calling .GetState
	I0103 18:59:16.162732   17285 main.go:141] libmachine: (addons-848866) Calling .DriverName
	I0103 18:59:16.162965   17285 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0103 18:59:16.162992   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHHostname
	I0103 18:59:16.165803   17285 main.go:141] libmachine: (addons-848866) DBG | domain addons-848866 has defined MAC address 52:54:00:c3:68:28 in network mk-addons-848866
	I0103 18:59:16.166333   17285 main.go:141] libmachine: (addons-848866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:68:28", ip: ""} in network mk-addons-848866: {Iface:virbr1 ExpiryTime:2024-01-03 19:58:27 +0000 UTC Type:0 Mac:52:54:00:c3:68:28 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:addons-848866 Clientid:01:52:54:00:c3:68:28}
	I0103 18:59:16.166365   17285 main.go:141] libmachine: (addons-848866) DBG | domain addons-848866 has defined IP address 192.168.39.253 and MAC address 52:54:00:c3:68:28 in network mk-addons-848866
	I0103 18:59:16.166511   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHPort
	I0103 18:59:16.166738   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHKeyPath
	I0103 18:59:16.166884   17285 main.go:141] libmachine: (addons-848866) Calling .GetSSHUsername
	I0103 18:59:16.167038   17285 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/addons-848866/id_rsa Username:docker}
	I0103 18:59:17.764899   17285 pod_ready.go:102] pod "coredns-5dd5756b68-65cqq" in "kube-system" namespace has status "Ready":"False"
	I0103 18:59:17.803466   17285 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.684129722s)
	I0103 18:59:17.803520   17285 main.go:141] libmachine: Making call to close driver server
	I0103 18:59:17.803533   17285 main.go:141] libmachine: (addons-848866) Calling .Close
	I0103 18:59:17.803557   17285 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.682892446s)
	I0103 18:59:17.803599   17285 main.go:141] libmachine: Making call to close driver server
	I0103 18:59:17.803614   17285 main.go:141] libmachine: (addons-848866) Calling .Close
	I0103 18:59:17.803627   17285 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.681473061s)
	I0103 18:59:17.803657   17285 main.go:141] libmachine: Making call to close driver server
	I0103 18:59:17.803675   17285 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.679893155s)
	I0103 18:59:17.803710   17285 main.go:141] libmachine: Making call to close driver server
	I0103 18:59:17.803743   17285 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.209125186s)
	I0103 18:59:17.803750   17285 main.go:141] libmachine: (addons-848866) Calling .Close
	I0103 18:59:17.803766   17285 main.go:141] libmachine: Making call to close driver server
	I0103 18:59:17.803781   17285 main.go:141] libmachine: (addons-848866) Calling .Close
	I0103 18:59:17.803839   17285 main.go:141] libmachine: Successfully made call to close driver server
	I0103 18:59:17.803853   17285 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 18:59:17.803857   17285 main.go:141] libmachine: (addons-848866) DBG | Closing plugin on server side
	I0103 18:59:17.803862   17285 main.go:141] libmachine: Making call to close driver server
	I0103 18:59:17.803873   17285 main.go:141] libmachine: (addons-848866) Calling .Close
	I0103 18:59:17.803930   17285 main.go:141] libmachine: Successfully made call to close driver server
	I0103 18:59:17.803951   17285 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 18:59:17.803940   17285 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.175898356s)
	I0103 18:59:17.803995   17285 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.685137771s)
	I0103 18:59:17.804019   17285 main.go:141] libmachine: Making call to close driver server
	I0103 18:59:17.804028   17285 main.go:141] libmachine: (addons-848866) Calling .Close
	I0103 18:59:17.804052   17285 main.go:141] libmachine: Making call to close driver server
	I0103 18:59:17.804074   17285 main.go:141] libmachine: (addons-848866) Calling .Close
	I0103 18:59:17.804085   17285 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.024124352s)
	I0103 18:59:17.804111   17285 main.go:141] libmachine: Making call to close driver server
	I0103 18:59:17.804117   17285 main.go:141] libmachine: Successfully made call to close driver server
	I0103 18:59:17.804125   17285 main.go:141] libmachine: (addons-848866) Calling .Close
	I0103 18:59:17.804128   17285 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 18:59:17.804140   17285 main.go:141] libmachine: Making call to close driver server
	I0103 18:59:17.804148   17285 main.go:141] libmachine: (addons-848866) Calling .Close
	I0103 18:59:17.804221   17285 main.go:141] libmachine: (addons-848866) DBG | Closing plugin on server side
	I0103 18:59:17.804242   17285 main.go:141] libmachine: Successfully made call to close driver server
	I0103 18:59:17.804253   17285 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 18:59:17.804262   17285 main.go:141] libmachine: Making call to close driver server
	I0103 18:59:17.804270   17285 main.go:141] libmachine: (addons-848866) Calling .Close
	I0103 18:59:17.804309   17285 main.go:141] libmachine: (addons-848866) DBG | Closing plugin on server side
	I0103 18:59:17.804327   17285 main.go:141] libmachine: Successfully made call to close driver server
	I0103 18:59:17.804335   17285 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 18:59:17.804510   17285 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.887483338s)
	I0103 18:59:17.804540   17285 main.go:141] libmachine: (addons-848866) DBG | Closing plugin on server side
	W0103 18:59:17.804543   17285 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0103 18:59:17.804572   17285 retry.go:31] will retry after 282.323903ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
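The failure above is an ordering race: the CSI hostpath manifests include a VolumeSnapshotClass in the same kubectl apply that creates the snapshot.storage.k8s.io CRDs, and the API server rejects the custom resource because those CRDs are not yet established ("ensure CRDs are installed first"); the tooling simply retries, and the later apply --force at 18:59:18 succeeds. A minimal sketch of the manual ordering fix, assuming the same manifest paths and a kubeconfig for this cluster (not the retry logic minikube itself uses):

	# Apply the snapshot CRDs first and wait until they are established...
	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	              -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	              -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	kubectl wait --for=condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io \
	  crd/volumesnapshotcontents.snapshot.storage.k8s.io \
	  crd/volumesnapshots.snapshot.storage.k8s.io
	# ...then apply the VolumeSnapshotClass and the snapshot controller.
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml \
	              -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml \
	              -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
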
	I0103 18:59:17.804638   17285 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.752416163s)
	I0103 18:59:17.804654   17285 main.go:141] libmachine: Making call to close driver server
	I0103 18:59:17.804664   17285 main.go:141] libmachine: (addons-848866) Calling .Close
	I0103 18:59:17.804859   17285 main.go:141] libmachine: (addons-848866) DBG | Closing plugin on server side
	I0103 18:59:17.804882   17285 main.go:141] libmachine: Successfully made call to close driver server
	I0103 18:59:17.804890   17285 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 18:59:17.803966   17285 main.go:141] libmachine: Making call to close driver server
	I0103 18:59:17.805025   17285 main.go:141] libmachine: (addons-848866) Calling .Close
	I0103 18:59:17.803955   17285 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.173148138s)
	I0103 18:59:17.805224   17285 main.go:141] libmachine: Making call to close driver server
	I0103 18:59:17.805243   17285 main.go:141] libmachine: (addons-848866) Calling .Close
	I0103 18:59:17.805295   17285 main.go:141] libmachine: (addons-848866) DBG | Closing plugin on server side
	I0103 18:59:17.805324   17285 main.go:141] libmachine: Successfully made call to close driver server
	I0103 18:59:17.805342   17285 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 18:59:17.806016   17285 main.go:141] libmachine: (addons-848866) DBG | Closing plugin on server side
	I0103 18:59:17.806051   17285 main.go:141] libmachine: Successfully made call to close driver server
	I0103 18:59:17.806060   17285 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 18:59:17.806070   17285 main.go:141] libmachine: Making call to close driver server
	I0103 18:59:17.806079   17285 main.go:141] libmachine: (addons-848866) Calling .Close
	I0103 18:59:17.806133   17285 main.go:141] libmachine: (addons-848866) DBG | Closing plugin on server side
	I0103 18:59:17.806156   17285 main.go:141] libmachine: Successfully made call to close driver server
	I0103 18:59:17.806165   17285 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 18:59:17.806174   17285 main.go:141] libmachine: Making call to close driver server
	I0103 18:59:17.806182   17285 main.go:141] libmachine: (addons-848866) Calling .Close
	I0103 18:59:17.806223   17285 main.go:141] libmachine: Successfully made call to close driver server
	I0103 18:59:17.806236   17285 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 18:59:17.806244   17285 addons.go:473] Verifying addon ingress=true in "addons-848866"
	I0103 18:59:17.809042   17285 out.go:177] * Verifying ingress addon...
	I0103 18:59:17.806493   17285 main.go:141] libmachine: (addons-848866) DBG | Closing plugin on server side
	I0103 18:59:17.806502   17285 main.go:141] libmachine: (addons-848866) DBG | Closing plugin on server side
	I0103 18:59:17.806514   17285 main.go:141] libmachine: Successfully made call to close driver server
	I0103 18:59:17.806605   17285 main.go:141] libmachine: Successfully made call to close driver server
	I0103 18:59:17.806609   17285 main.go:141] libmachine: Successfully made call to close driver server
	I0103 18:59:17.806621   17285 main.go:141] libmachine: (addons-848866) DBG | Closing plugin on server side
	I0103 18:59:17.807336   17285 main.go:141] libmachine: Successfully made call to close driver server
	I0103 18:59:17.807369   17285 main.go:141] libmachine: (addons-848866) DBG | Closing plugin on server side
	I0103 18:59:17.807474   17285 main.go:141] libmachine: Successfully made call to close driver server
	I0103 18:59:17.808169   17285 main.go:141] libmachine: (addons-848866) Calling .Close
	I0103 18:59:17.810510   17285 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 18:59:17.810543   17285 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 18:59:17.810563   17285 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 18:59:17.810568   17285 main.go:141] libmachine: Making call to close driver server
	I0103 18:59:17.810576   17285 main.go:141] libmachine: Making call to close driver server
	I0103 18:59:17.810580   17285 main.go:141] libmachine: (addons-848866) Calling .Close
	I0103 18:59:17.810586   17285 main.go:141] libmachine: (addons-848866) Calling .Close
	I0103 18:59:17.810552   17285 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 18:59:17.810558   17285 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 18:59:17.812052   17285 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-848866 service yakd-dashboard -n yakd-dashboard
	
	
	I0103 18:59:17.810634   17285 main.go:141] libmachine: Making call to close driver server
	I0103 18:59:17.810792   17285 main.go:141] libmachine: Successfully made call to close driver server
	I0103 18:59:17.810839   17285 main.go:141] libmachine: Successfully made call to close driver server
	I0103 18:59:17.810843   17285 main.go:141] libmachine: (addons-848866) DBG | Closing plugin on server side
	I0103 18:59:17.810851   17285 main.go:141] libmachine: (addons-848866) DBG | Closing plugin on server side
	I0103 18:59:17.810870   17285 main.go:141] libmachine: (addons-848866) DBG | Closing plugin on server side
	I0103 18:59:17.810872   17285 main.go:141] libmachine: Successfully made call to close driver server
	I0103 18:59:17.811378   17285 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0103 18:59:17.813276   17285 main.go:141] libmachine: (addons-848866) Calling .Close
	I0103 18:59:17.813309   17285 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 18:59:17.813330   17285 main.go:141] libmachine: Making call to close driver server
	I0103 18:59:17.813333   17285 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 18:59:17.813335   17285 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 18:59:17.813340   17285 main.go:141] libmachine: (addons-848866) Calling .Close
	I0103 18:59:17.813346   17285 addons.go:473] Verifying addon registry=true in "addons-848866"
	I0103 18:59:17.814771   17285 out.go:177] * Verifying registry addon...
	I0103 18:59:17.813544   17285 main.go:141] libmachine: (addons-848866) DBG | Closing plugin on server side
	I0103 18:59:17.813549   17285 main.go:141] libmachine: Successfully made call to close driver server
	I0103 18:59:17.813710   17285 main.go:141] libmachine: Successfully made call to close driver server
	I0103 18:59:17.813732   17285 main.go:141] libmachine: (addons-848866) DBG | Closing plugin on server side
	I0103 18:59:17.814854   17285 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 18:59:17.816135   17285 addons.go:473] Verifying addon metrics-server=true in "addons-848866"
	I0103 18:59:17.814869   17285 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 18:59:17.816949   17285 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0103 18:59:17.837883   17285 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0103 18:59:17.837906   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 18:59:17.838271   17285 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0103 18:59:17.838292   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 18:59:17.851343   17285 main.go:141] libmachine: Making call to close driver server
	I0103 18:59:17.851362   17285 main.go:141] libmachine: Making call to close driver server
	I0103 18:59:17.851381   17285 main.go:141] libmachine: (addons-848866) Calling .Close
	I0103 18:59:17.851368   17285 main.go:141] libmachine: (addons-848866) Calling .Close
	I0103 18:59:17.851665   17285 main.go:141] libmachine: Successfully made call to close driver server
	I0103 18:59:17.851694   17285 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 18:59:17.851707   17285 main.go:141] libmachine: (addons-848866) DBG | Closing plugin on server side
	I0103 18:59:17.851667   17285 main.go:141] libmachine: Successfully made call to close driver server
	I0103 18:59:17.851722   17285 main.go:141] libmachine: Making call to close connection to plugin binary
	W0103 18:59:17.851774   17285 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
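The warning above is an optimistic-concurrency conflict: the update marking the local-path StorageClass as default raced with another writer, so the API server rejected it with "the object has been modified". The operation is safe to repeat against the latest object version. A minimal sketch of doing that by hand, assuming the local-path StorageClass exists and kubectl is pointed at this cluster (the annotation shown is the standard default-class marker, not necessarily the exact call minikube makes):

	# Re-apply the default-class annotation; patch reads the latest version,
	# so the earlier resourceVersion conflict does not recur.
	kubectl patch storageclass local-path \
	  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
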
	I0103 18:59:18.087772   17285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0103 18:59:18.356361   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 18:59:18.473257   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 18:59:18.724418   17285 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.561428433s)
	I0103 18:59:18.726095   17285 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0103 18:59:18.724418   17285 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (8.519781911s)
	I0103 18:59:18.726142   17285 main.go:141] libmachine: Making call to close driver server
	I0103 18:59:18.727504   17285 main.go:141] libmachine: (addons-848866) Calling .Close
	I0103 18:59:18.727472   17285 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0103 18:59:18.728945   17285 addons.go:429] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0103 18:59:18.728959   17285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0103 18:59:18.727769   17285 main.go:141] libmachine: Successfully made call to close driver server
	I0103 18:59:18.729011   17285 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 18:59:18.729031   17285 main.go:141] libmachine: Making call to close driver server
	I0103 18:59:18.729042   17285 main.go:141] libmachine: (addons-848866) Calling .Close
	I0103 18:59:18.727798   17285 main.go:141] libmachine: (addons-848866) DBG | Closing plugin on server side
	I0103 18:59:18.729310   17285 main.go:141] libmachine: (addons-848866) DBG | Closing plugin on server side
	I0103 18:59:18.730567   17285 main.go:141] libmachine: Successfully made call to close driver server
	I0103 18:59:18.730582   17285 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 18:59:18.730596   17285 addons.go:473] Verifying addon csi-hostpath-driver=true in "addons-848866"
	I0103 18:59:18.732208   17285 out.go:177] * Verifying csi-hostpath-driver addon...
	I0103 18:59:18.734387   17285 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0103 18:59:18.762584   17285 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0103 18:59:18.762614   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 18:59:18.816509   17285 addons.go:429] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0103 18:59:18.816532   17285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0103 18:59:18.832802   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 18:59:18.839731   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 18:59:18.857744   17285 addons.go:429] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0103 18:59:18.857768   17285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I0103 18:59:18.934625   17285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0103 18:59:19.243306   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 18:59:19.331029   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 18:59:19.478200   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 18:59:19.750027   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 18:59:19.819128   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 18:59:19.825693   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 18:59:20.208001   17285 pod_ready.go:102] pod "coredns-5dd5756b68-65cqq" in "kube-system" namespace has status "Ready":"False"
	I0103 18:59:20.249339   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 18:59:20.324626   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 18:59:20.343810   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 18:59:20.566355   17285 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.478537861s)
	I0103 18:59:20.566399   17285 main.go:141] libmachine: Making call to close driver server
	I0103 18:59:20.566409   17285 main.go:141] libmachine: (addons-848866) Calling .Close
	I0103 18:59:20.566776   17285 main.go:141] libmachine: (addons-848866) DBG | Closing plugin on server side
	I0103 18:59:20.566839   17285 main.go:141] libmachine: Successfully made call to close driver server
	I0103 18:59:20.566854   17285 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 18:59:20.566867   17285 main.go:141] libmachine: Making call to close driver server
	I0103 18:59:20.566879   17285 main.go:141] libmachine: (addons-848866) Calling .Close
	I0103 18:59:20.567091   17285 main.go:141] libmachine: (addons-848866) DBG | Closing plugin on server side
	I0103 18:59:20.567172   17285 main.go:141] libmachine: Successfully made call to close driver server
	I0103 18:59:20.567190   17285 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 18:59:20.764746   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 18:59:20.845480   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 18:59:20.850309   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 18:59:20.900138   17285 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.96545671s)
	I0103 18:59:20.900205   17285 main.go:141] libmachine: Making call to close driver server
	I0103 18:59:20.900220   17285 main.go:141] libmachine: (addons-848866) Calling .Close
	I0103 18:59:20.900517   17285 main.go:141] libmachine: Successfully made call to close driver server
	I0103 18:59:20.900558   17285 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 18:59:20.900560   17285 main.go:141] libmachine: (addons-848866) DBG | Closing plugin on server side
	I0103 18:59:20.900578   17285 main.go:141] libmachine: Making call to close driver server
	I0103 18:59:20.900589   17285 main.go:141] libmachine: (addons-848866) Calling .Close
	I0103 18:59:20.900831   17285 main.go:141] libmachine: Successfully made call to close driver server
	I0103 18:59:20.900846   17285 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 18:59:20.901768   17285 addons.go:473] Verifying addon gcp-auth=true in "addons-848866"
	I0103 18:59:20.903568   17285 out.go:177] * Verifying gcp-auth addon...
	I0103 18:59:20.905911   17285 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0103 18:59:20.931960   17285 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0103 18:59:20.931979   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 18:59:21.250789   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 18:59:21.318838   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 18:59:21.324968   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 18:59:21.427079   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 18:59:21.751478   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 18:59:21.818991   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 18:59:21.822631   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 18:59:21.910277   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 18:59:22.240338   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 18:59:22.318116   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 18:59:22.324614   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 18:59:22.413552   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 18:59:22.699346   17285 pod_ready.go:102] pod "coredns-5dd5756b68-65cqq" in "kube-system" namespace has status "Ready":"False"
	I0103 18:59:22.740544   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 18:59:22.818924   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 18:59:22.822760   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 18:59:22.909967   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 18:59:23.240981   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 18:59:23.318252   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 18:59:23.321452   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 18:59:23.410449   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 18:59:23.740487   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 18:59:23.821181   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 18:59:23.823554   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 18:59:23.911325   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 18:59:24.242379   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 18:59:24.318688   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 18:59:24.322083   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 18:59:24.411080   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 18:59:24.701472   17285 pod_ready.go:102] pod "coredns-5dd5756b68-65cqq" in "kube-system" namespace has status "Ready":"False"
	I0103 18:59:24.742973   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 18:59:24.818515   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 18:59:24.823086   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 18:59:24.912369   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 18:59:25.244984   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 18:59:25.320454   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 18:59:25.331391   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 18:59:25.410854   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 18:59:25.748171   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 18:59:25.818305   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 18:59:25.822155   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 18:59:25.912735   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 18:59:26.246797   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 18:59:26.318578   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 18:59:26.326911   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 18:59:26.427832   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 18:59:26.711559   17285 pod_ready.go:102] pod "coredns-5dd5756b68-65cqq" in "kube-system" namespace has status "Ready":"False"
	I0103 18:59:26.745789   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 18:59:26.818397   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 18:59:26.821412   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 18:59:26.920173   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 18:59:27.248775   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 18:59:27.321563   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 18:59:27.336178   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 18:59:27.410509   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 18:59:27.750188   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 18:59:27.823225   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 18:59:27.830669   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 18:59:27.910306   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 18:59:28.252114   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 18:59:28.328386   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 18:59:28.328493   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 18:59:28.416398   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 18:59:29.120783   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 18:59:29.122563   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 18:59:29.142605   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 18:59:29.143188   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 18:59:29.202056   17285 pod_ready.go:102] pod "coredns-5dd5756b68-65cqq" in "kube-system" namespace has status "Ready":"False"
	I0103 18:59:29.240215   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 18:59:29.319541   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 18:59:29.322946   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 18:59:29.414897   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 18:59:29.741696   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 18:59:29.818483   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 18:59:29.824055   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 18:59:29.912570   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 18:59:30.240073   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 18:59:30.320260   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 18:59:30.326100   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 18:59:30.413010   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 18:59:30.746215   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 18:59:30.817533   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 18:59:30.821845   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 18:59:30.911000   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 18:59:31.242619   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 18:59:31.325054   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 18:59:31.327029   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 18:59:31.410557   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 18:59:31.700074   17285 pod_ready.go:102] pod "coredns-5dd5756b68-65cqq" in "kube-system" namespace has status "Ready":"False"
	I0103 18:59:31.740603   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 18:59:31.818423   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 18:59:31.822370   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 18:59:31.910623   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 18:59:32.241274   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 18:59:32.318978   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 18:59:32.321720   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 18:59:32.409895   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 18:59:32.746137   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 18:59:32.818062   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 18:59:32.821600   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 18:59:32.909715   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 18:59:33.240809   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 18:59:33.318418   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 18:59:33.322787   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 18:59:33.417497   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 18:59:33.764763   17285 pod_ready.go:102] pod "coredns-5dd5756b68-65cqq" in "kube-system" namespace has status "Ready":"False"
	I0103 18:59:33.767397   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 18:59:33.817630   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 18:59:33.823909   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 18:59:33.910042   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 18:59:34.240496   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 18:59:34.319226   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 18:59:34.322680   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 18:59:34.412850   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 18:59:34.746101   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 18:59:34.819501   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 18:59:34.831681   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 18:59:34.923398   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 18:59:35.244216   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 18:59:35.319495   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 18:59:35.322136   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 18:59:35.411125   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 18:59:35.740807   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 18:59:35.818909   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 18:59:35.824582   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 18:59:35.910425   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 18:59:36.199386   17285 pod_ready.go:102] pod "coredns-5dd5756b68-65cqq" in "kube-system" namespace has status "Ready":"False"
	I0103 18:59:36.241067   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 18:59:36.320329   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 18:59:36.323236   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 18:59:36.409661   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 18:59:36.933698   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 18:59:36.933948   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 18:59:36.934110   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 18:59:36.934308   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 18:59:37.241779   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 18:59:37.320510   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 18:59:37.328433   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 18:59:37.418071   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 18:59:37.739651   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 18:59:37.818646   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 18:59:37.821924   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 18:59:37.910776   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 18:59:38.204199   17285 pod_ready.go:102] pod "coredns-5dd5756b68-65cqq" in "kube-system" namespace has status "Ready":"False"
	I0103 18:59:38.240393   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 18:59:38.318409   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 18:59:38.321592   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 18:59:38.410094   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 18:59:38.740868   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 18:59:38.817976   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 18:59:38.821638   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 18:59:38.909675   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 18:59:39.240624   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 18:59:39.318229   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 18:59:39.321687   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 18:59:39.410976   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 18:59:39.743710   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 18:59:39.818680   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 18:59:39.823209   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 18:59:39.910511   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 18:59:40.240490   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 18:59:40.318874   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 18:59:40.322502   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 18:59:40.410071   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 18:59:40.700805   17285 pod_ready.go:102] pod "coredns-5dd5756b68-65cqq" in "kube-system" namespace has status "Ready":"False"
	I0103 18:59:40.753125   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 18:59:40.822401   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 18:59:40.830628   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 18:59:40.909839   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 18:59:41.244168   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 18:59:41.320258   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 18:59:41.322387   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 18:59:41.410088   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 18:59:41.742172   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 18:59:41.819095   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 18:59:41.822714   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 18:59:41.910634   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 18:59:42.240275   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 18:59:42.318077   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 18:59:42.321451   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 18:59:42.410154   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 18:59:42.740667   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 18:59:42.819306   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 18:59:42.822050   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 18:59:42.910803   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 18:59:43.200000   17285 pod_ready.go:102] pod "coredns-5dd5756b68-65cqq" in "kube-system" namespace has status "Ready":"False"
	I0103 18:59:43.240355   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 18:59:43.318193   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 18:59:43.321695   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 18:59:43.410188   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 18:59:43.741498   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 18:59:43.818685   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 18:59:43.821476   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 18:59:43.910472   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 18:59:44.240868   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 18:59:44.321010   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 18:59:44.323039   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 18:59:44.410668   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 18:59:44.739848   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 18:59:44.819025   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 18:59:44.822086   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 18:59:44.911027   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 18:59:45.201361   17285 pod_ready.go:102] pod "coredns-5dd5756b68-65cqq" in "kube-system" namespace has status "Ready":"False"
	I0103 18:59:45.242902   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 18:59:45.319610   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 18:59:45.321557   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 18:59:45.410584   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 18:59:45.741826   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 18:59:45.818379   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 18:59:45.822434   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 18:59:45.909572   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 18:59:46.241685   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 18:59:46.318104   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 18:59:46.321767   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 18:59:46.412498   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 18:59:46.740962   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 18:59:46.818889   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 18:59:46.822058   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 18:59:46.916607   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 18:59:47.201633   17285 pod_ready.go:102] pod "coredns-5dd5756b68-65cqq" in "kube-system" namespace has status "Ready":"False"
	I0103 18:59:47.240051   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 18:59:47.318300   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 18:59:47.322111   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 18:59:47.410832   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 18:59:47.741291   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 18:59:47.821307   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 18:59:47.822186   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 18:59:47.910615   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 18:59:48.277081   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 18:59:48.318701   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 18:59:48.324172   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 18:59:48.410102   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 18:59:48.700296   17285 pod_ready.go:92] pod "coredns-5dd5756b68-65cqq" in "kube-system" namespace has status "Ready":"True"
	I0103 18:59:48.700316   17285 pod_ready.go:81] duration metric: took 35.507037046s waiting for pod "coredns-5dd5756b68-65cqq" in "kube-system" namespace to be "Ready" ...
	I0103 18:59:48.700324   17285 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-bt2pb" in "kube-system" namespace to be "Ready" ...
	I0103 18:59:48.703441   17285 pod_ready.go:97] error getting pod "coredns-5dd5756b68-bt2pb" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-bt2pb" not found
	I0103 18:59:48.703463   17285 pod_ready.go:81] duration metric: took 3.133044ms waiting for pod "coredns-5dd5756b68-bt2pb" in "kube-system" namespace to be "Ready" ...
	E0103 18:59:48.703472   17285 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-bt2pb" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-bt2pb" not found
	I0103 18:59:48.703477   17285 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-848866" in "kube-system" namespace to be "Ready" ...
	I0103 18:59:48.711111   17285 pod_ready.go:92] pod "etcd-addons-848866" in "kube-system" namespace has status "Ready":"True"
	I0103 18:59:48.711131   17285 pod_ready.go:81] duration metric: took 7.649034ms waiting for pod "etcd-addons-848866" in "kube-system" namespace to be "Ready" ...
	I0103 18:59:48.711139   17285 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-848866" in "kube-system" namespace to be "Ready" ...
	I0103 18:59:48.715999   17285 pod_ready.go:92] pod "kube-apiserver-addons-848866" in "kube-system" namespace has status "Ready":"True"
	I0103 18:59:48.716018   17285 pod_ready.go:81] duration metric: took 4.872951ms waiting for pod "kube-apiserver-addons-848866" in "kube-system" namespace to be "Ready" ...
	I0103 18:59:48.716026   17285 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-848866" in "kube-system" namespace to be "Ready" ...
	I0103 18:59:48.721121   17285 pod_ready.go:92] pod "kube-controller-manager-addons-848866" in "kube-system" namespace has status "Ready":"True"
	I0103 18:59:48.721138   17285 pod_ready.go:81] duration metric: took 5.106873ms waiting for pod "kube-controller-manager-addons-848866" in "kube-system" namespace to be "Ready" ...
	I0103 18:59:48.721146   17285 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mn4pd" in "kube-system" namespace to be "Ready" ...
	I0103 18:59:48.741062   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 18:59:48.818324   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 18:59:48.827455   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 18:59:48.898306   17285 pod_ready.go:92] pod "kube-proxy-mn4pd" in "kube-system" namespace has status "Ready":"True"
	I0103 18:59:48.898328   17285 pod_ready.go:81] duration metric: took 177.176055ms waiting for pod "kube-proxy-mn4pd" in "kube-system" namespace to be "Ready" ...
	I0103 18:59:48.898337   17285 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-848866" in "kube-system" namespace to be "Ready" ...
	I0103 18:59:48.910313   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 18:59:49.241645   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 18:59:49.297760   17285 pod_ready.go:92] pod "kube-scheduler-addons-848866" in "kube-system" namespace has status "Ready":"True"
	I0103 18:59:49.297790   17285 pod_ready.go:81] duration metric: took 399.445969ms waiting for pod "kube-scheduler-addons-848866" in "kube-system" namespace to be "Ready" ...
	I0103 18:59:49.297802   17285 pod_ready.go:38] duration metric: took 36.490020791s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 18:59:49.297820   17285 api_server.go:52] waiting for apiserver process to appear ...
	I0103 18:59:49.297882   17285 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 18:59:49.318988   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 18:59:49.324359   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 18:59:49.339298   17285 api_server.go:72] duration metric: took 40.461533189s to wait for apiserver process to appear ...
	I0103 18:59:49.339331   17285 api_server.go:88] waiting for apiserver healthz status ...
	I0103 18:59:49.339354   17285 api_server.go:253] Checking apiserver healthz at https://192.168.39.253:8443/healthz ...
	I0103 18:59:49.344345   17285 api_server.go:279] https://192.168.39.253:8443/healthz returned 200:
	ok
	I0103 18:59:49.345415   17285 api_server.go:141] control plane version: v1.28.4
	I0103 18:59:49.345435   17285 api_server.go:131] duration metric: took 6.098078ms to wait for apiserver health ...
	I0103 18:59:49.345446   17285 system_pods.go:43] waiting for kube-system pods to appear ...
	I0103 18:59:49.409965   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 18:59:49.506412   17285 system_pods.go:59] 18 kube-system pods found
	I0103 18:59:49.506451   17285 system_pods.go:61] "coredns-5dd5756b68-65cqq" [2fb26394-bfc6-4d70-8a66-a3643c421b4a] Running
	I0103 18:59:49.506458   17285 system_pods.go:61] "csi-hostpath-attacher-0" [e1004b45-c943-42c1-91ce-26e2c3896eb4] Running
	I0103 18:59:49.506468   17285 system_pods.go:61] "csi-hostpath-resizer-0" [86461867-9241-4241-b5d5-6c589eef9947] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0103 18:59:49.506477   17285 system_pods.go:61] "csi-hostpathplugin-psmqd" [50a28720-c0f7-427d-94de-20628c3194fc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0103 18:59:49.506486   17285 system_pods.go:61] "etcd-addons-848866" [5f4ea756-594f-4fd4-90b2-61aff5a596ba] Running
	I0103 18:59:49.506492   17285 system_pods.go:61] "kube-apiserver-addons-848866" [34d8db6c-f545-439b-8b2f-68e4fff8d14a] Running
	I0103 18:59:49.506500   17285 system_pods.go:61] "kube-controller-manager-addons-848866" [05988294-ffef-4840-80b7-a1e2983ea0b9] Running
	I0103 18:59:49.506508   17285 system_pods.go:61] "kube-ingress-dns-minikube" [7fb40f5c-ea06-451a-bf9d-4ccd66d89336] Running
	I0103 18:59:49.506514   17285 system_pods.go:61] "kube-proxy-mn4pd" [4ec9c5db-0675-4813-a8b7-808b6525239a] Running
	I0103 18:59:49.506537   17285 system_pods.go:61] "kube-scheduler-addons-848866" [e4398c49-4a7f-4293-8b27-c548f64ba4a7] Running
	I0103 18:59:49.506548   17285 system_pods.go:61] "metrics-server-7c66d45ddc-vxk9c" [b3acd530-1430-4c25-9c02-6706eb256850] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0103 18:59:49.506567   17285 system_pods.go:61] "nvidia-device-plugin-daemonset-r7lx5" [8aa19cd3-113d-4ffc-bc90-bb4545d5700d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0103 18:59:49.506578   17285 system_pods.go:61] "registry-proxy-glv5v" [22a80b4a-fe0d-4fe5-a339-e484f216e167] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0103 18:59:49.506590   17285 system_pods.go:61] "registry-vb8nh" [8239cd82-c41f-448e-b099-83140af6d1b5] Running
	I0103 18:59:49.506604   17285 system_pods.go:61] "snapshot-controller-58dbcc7b99-sxxq5" [abd141eb-1670-4d8f-80da-2a8586d682b3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0103 18:59:49.506619   17285 system_pods.go:61] "snapshot-controller-58dbcc7b99-z5t49" [1ee54c4e-d44d-405c-a6a9-54b1ba9cbc78] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0103 18:59:49.506630   17285 system_pods.go:61] "storage-provisioner" [1deab566-1a42-4d50-a45b-a772cea4cee3] Running
	I0103 18:59:49.506640   17285 system_pods.go:61] "tiller-deploy-7b677967b9-gjh42" [d2fde79f-5c98-4c33-920b-7f58e5b30565] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0103 18:59:49.506653   17285 system_pods.go:74] duration metric: took 161.199074ms to wait for pod list to return data ...
	I0103 18:59:49.506666   17285 default_sa.go:34] waiting for default service account to be created ...
	I0103 18:59:49.697744   17285 default_sa.go:45] found service account: "default"
	I0103 18:59:49.697769   17285 default_sa.go:55] duration metric: took 191.09302ms for default service account to be created ...
	I0103 18:59:49.697778   17285 system_pods.go:116] waiting for k8s-apps to be running ...
	I0103 18:59:49.740108   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 18:59:49.818362   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 18:59:49.821555   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 18:59:49.904693   17285 system_pods.go:86] 18 kube-system pods found
	I0103 18:59:49.904719   17285 system_pods.go:89] "coredns-5dd5756b68-65cqq" [2fb26394-bfc6-4d70-8a66-a3643c421b4a] Running
	I0103 18:59:49.904732   17285 system_pods.go:89] "csi-hostpath-attacher-0" [e1004b45-c943-42c1-91ce-26e2c3896eb4] Running
	I0103 18:59:49.904743   17285 system_pods.go:89] "csi-hostpath-resizer-0" [86461867-9241-4241-b5d5-6c589eef9947] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0103 18:59:49.904752   17285 system_pods.go:89] "csi-hostpathplugin-psmqd" [50a28720-c0f7-427d-94de-20628c3194fc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0103 18:59:49.904761   17285 system_pods.go:89] "etcd-addons-848866" [5f4ea756-594f-4fd4-90b2-61aff5a596ba] Running
	I0103 18:59:49.904768   17285 system_pods.go:89] "kube-apiserver-addons-848866" [34d8db6c-f545-439b-8b2f-68e4fff8d14a] Running
	I0103 18:59:49.904775   17285 system_pods.go:89] "kube-controller-manager-addons-848866" [05988294-ffef-4840-80b7-a1e2983ea0b9] Running
	I0103 18:59:49.904786   17285 system_pods.go:89] "kube-ingress-dns-minikube" [7fb40f5c-ea06-451a-bf9d-4ccd66d89336] Running
	I0103 18:59:49.904793   17285 system_pods.go:89] "kube-proxy-mn4pd" [4ec9c5db-0675-4813-a8b7-808b6525239a] Running
	I0103 18:59:49.904802   17285 system_pods.go:89] "kube-scheduler-addons-848866" [e4398c49-4a7f-4293-8b27-c548f64ba4a7] Running
	I0103 18:59:49.904810   17285 system_pods.go:89] "metrics-server-7c66d45ddc-vxk9c" [b3acd530-1430-4c25-9c02-6706eb256850] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0103 18:59:49.904819   17285 system_pods.go:89] "nvidia-device-plugin-daemonset-r7lx5" [8aa19cd3-113d-4ffc-bc90-bb4545d5700d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0103 18:59:49.904827   17285 system_pods.go:89] "registry-proxy-glv5v" [22a80b4a-fe0d-4fe5-a339-e484f216e167] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0103 18:59:49.904834   17285 system_pods.go:89] "registry-vb8nh" [8239cd82-c41f-448e-b099-83140af6d1b5] Running
	I0103 18:59:49.904840   17285 system_pods.go:89] "snapshot-controller-58dbcc7b99-sxxq5" [abd141eb-1670-4d8f-80da-2a8586d682b3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0103 18:59:49.904849   17285 system_pods.go:89] "snapshot-controller-58dbcc7b99-z5t49" [1ee54c4e-d44d-405c-a6a9-54b1ba9cbc78] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0103 18:59:49.904854   17285 system_pods.go:89] "storage-provisioner" [1deab566-1a42-4d50-a45b-a772cea4cee3] Running
	I0103 18:59:49.904867   17285 system_pods.go:89] "tiller-deploy-7b677967b9-gjh42" [d2fde79f-5c98-4c33-920b-7f58e5b30565] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0103 18:59:49.904879   17285 system_pods.go:126] duration metric: took 207.094796ms to wait for k8s-apps to be running ...
	I0103 18:59:49.904892   17285 system_svc.go:44] waiting for kubelet service to be running ....
	I0103 18:59:49.904941   17285 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 18:59:49.914417   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 18:59:49.958751   17285 system_svc.go:56] duration metric: took 53.847649ms WaitForService to wait for kubelet.
	I0103 18:59:49.958828   17285 kubeadm.go:581] duration metric: took 41.081067906s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0103 18:59:49.958855   17285 node_conditions.go:102] verifying NodePressure condition ...
	I0103 18:59:50.098494   17285 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0103 18:59:50.098549   17285 node_conditions.go:123] node cpu capacity is 2
	I0103 18:59:50.098565   17285 node_conditions.go:105] duration metric: took 139.703964ms to run NodePressure ...
	I0103 18:59:50.098597   17285 start.go:228] waiting for startup goroutines ...
	I0103 18:59:50.249291   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 18:59:50.319961   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 18:59:50.322308   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 18:59:50.410326   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 18:59:50.741461   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 18:59:50.821432   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 18:59:50.828297   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 18:59:50.917454   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 18:59:51.242831   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 18:59:51.319027   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 18:59:51.323249   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 18:59:51.412571   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 18:59:51.759669   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 18:59:51.830381   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 18:59:51.831647   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 18:59:51.911323   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 18:59:52.240689   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 18:59:52.318802   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 18:59:52.322341   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 18:59:52.412240   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 18:59:52.740788   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 18:59:52.818614   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 18:59:52.821834   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 18:59:52.910989   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 18:59:53.240476   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 18:59:53.318509   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 18:59:53.322461   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 18:59:53.410049   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 18:59:53.788313   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 18:59:53.817859   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 18:59:53.830060   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 18:59:53.910185   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 18:59:54.240830   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 18:59:54.319830   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 18:59:54.322272   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 18:59:54.410421   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 18:59:54.741729   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 18:59:54.820649   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 18:59:54.833061   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 18:59:54.911591   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 18:59:55.240736   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 18:59:55.319374   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 18:59:55.323472   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 18:59:55.409874   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 18:59:55.741433   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 18:59:55.818648   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 18:59:55.833669   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 18:59:55.911754   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 18:59:56.241618   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 18:59:56.319215   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 18:59:56.322832   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 18:59:56.410464   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 18:59:56.743096   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 18:59:56.818373   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 18:59:56.822182   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 18:59:56.910211   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 18:59:57.244243   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 18:59:57.318682   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 18:59:57.321938   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 18:59:57.410503   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 18:59:57.740938   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 18:59:57.819294   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 18:59:57.822723   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 18:59:57.910425   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 18:59:58.240947   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 18:59:58.318902   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 18:59:58.322067   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 18:59:58.409945   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 18:59:58.740964   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 18:59:58.821978   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 18:59:58.823223   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 18:59:58.911178   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 18:59:59.241890   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 18:59:59.318781   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 18:59:59.326368   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 18:59:59.410777   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 18:59:59.747535   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 18:59:59.817917   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 18:59:59.820963   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 18:59:59.910267   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:00.242071   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:00.319242   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:00.322441   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:00.413667   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:00.739930   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:00.818296   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:00.821481   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:00.910084   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:01.240689   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:01.325315   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:01.327383   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:01.419196   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:01.741441   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:01.817529   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:01.821915   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:01.911635   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:02.240933   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:02.320445   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:02.323145   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:02.410979   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:02.744406   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:02.819489   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:02.822427   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:02.912826   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:03.240539   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:03.319789   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:03.321461   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:03.409885   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:03.740689   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:03.818755   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:03.822914   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:03.910287   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:04.241429   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:04.318345   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:04.323592   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:04.410653   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:04.744382   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:04.821362   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:04.825677   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:04.911053   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:05.240862   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:05.319547   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:05.323153   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:05.410493   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:05.740243   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:05.818581   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:05.822035   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:05.910723   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:06.240411   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:06.318162   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:06.321508   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:06.409718   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:06.740646   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:06.827738   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:06.828468   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:06.909839   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:07.639252   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:07.657142   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:07.665601   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:07.666576   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:07.741173   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:07.818936   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:07.822953   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:07.910548   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:08.239974   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:08.318437   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:08.321958   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:08.409605   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:08.740693   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:08.818582   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:08.821961   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:08.910200   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:09.243722   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:09.319431   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:09.325587   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:09.410020   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:10.077346   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:10.078671   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:10.080767   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:10.083607   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:10.239922   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:10.318922   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:10.322763   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:10.409876   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:10.740612   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:10.818486   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:10.825683   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:10.910175   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:11.249300   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:11.317917   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:11.321814   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:11.410716   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:11.740506   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:11.820214   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:11.822093   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:11.910344   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:12.242406   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:12.318649   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:12.321972   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:12.411860   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:12.742324   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:12.818185   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:12.822149   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:12.910504   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:13.241505   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:13.318248   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:13.321911   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:13.411500   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:13.740192   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:13.818548   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:13.822467   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:13.911221   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:14.241144   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:14.319171   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:14.322689   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:14.410013   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:14.742619   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:14.818204   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:14.822383   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:14.910204   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:15.242214   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:15.331622   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:15.331818   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:15.411320   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:15.748968   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:15.826002   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:15.826924   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:15.910317   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:16.241010   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:16.317337   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:16.321071   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:16.410054   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:16.740775   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:16.818483   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:16.823793   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:16.913417   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:17.241374   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:17.318718   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:17.322476   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:17.410491   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:17.740672   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:17.818926   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:17.826028   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:17.912337   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:18.243018   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:18.317510   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:18.325808   17285 kapi.go:107] duration metric: took 1m0.508855464s to wait for kubernetes.io/minikube-addons=registry ...
	I0103 19:00:18.410303   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:18.740394   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:18.818984   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:18.910159   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:19.241606   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:19.320020   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:19.412027   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:19.741354   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:19.818495   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:19.909657   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:20.240034   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:20.317563   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:20.411118   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:20.740849   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:20.819045   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:20.910562   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:21.240367   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:21.318996   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:21.410514   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:21.747872   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:21.824290   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:21.911210   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:22.240777   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:22.320229   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:22.410420   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:22.739869   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:22.819686   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:22.910738   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:23.240331   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:23.319775   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:23.411013   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:23.741386   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:23.818900   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:23.911499   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:24.241750   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:24.321921   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:24.410868   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:24.740307   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:24.818649   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:24.910436   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:25.239910   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:25.319008   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:25.411143   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:25.740257   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:25.819330   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:25.913563   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:26.241406   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:26.318658   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:26.410133   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:26.751475   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:26.818421   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:26.910466   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:27.241087   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:27.318746   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:27.410020   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:27.740836   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:27.819317   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:27.911744   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:28.243240   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:28.322543   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:28.410415   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:28.739810   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:28.818531   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:28.910646   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:29.249348   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:29.318073   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:29.410112   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:29.740753   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:29.818383   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:29.910412   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:30.240413   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:30.318313   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:30.410877   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:30.740623   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:30.819814   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:30.909737   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:31.243614   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:31.319147   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:31.410949   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:31.742239   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:31.819997   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:31.914464   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:32.239892   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:32.319578   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:32.410545   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:32.740423   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:32.818089   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:32.909587   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:33.240627   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:33.318122   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:33.410761   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:33.740790   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:33.818459   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:33.913595   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:34.241931   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:34.318656   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:34.409962   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:34.742982   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:34.818928   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:34.910213   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:35.432873   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:35.433032   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:35.433112   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:35.741640   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:35.818543   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:35.913820   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:36.243727   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:36.323491   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:36.410557   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:36.744698   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:36.819592   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:36.911217   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:37.241000   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:37.321777   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:37.409956   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:37.741593   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:37.825095   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:37.911713   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:38.240790   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:38.321611   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:38.411555   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:38.741790   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:38.819245   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:38.909649   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:39.241024   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:39.318677   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:39.410150   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:39.761322   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:39.817464   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:39.910477   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:40.244371   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:40.317807   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:40.410337   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:40.741466   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:40.818423   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:40.911127   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:41.241020   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:41.318235   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:41.410009   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:41.740523   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:41.818472   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:41.910722   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:42.240742   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:42.433684   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:42.433945   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:42.740846   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:42.818423   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:42.910600   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:43.240882   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:43.319230   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:43.409824   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:43.741147   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:43.817951   17285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:43.910072   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:44.241547   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:44.318117   17285 kapi.go:107] duration metric: took 1m26.506733579s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0103 19:00:44.410254   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:44.741220   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:44.922028   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:45.241179   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:45.410227   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:45.741539   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:45.910471   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:46.240387   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:46.409974   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:46.741346   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:46.911317   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:47.241890   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:47.409907   17285 kapi.go:107] duration metric: took 1m26.503991918s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0103 19:00:47.411896   17285 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-848866 cluster.
	I0103 19:00:47.413808   17285 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0103 19:00:47.415530   17285 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0103 19:00:47.756873   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:48.240187   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:48.741225   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:49.240953   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:49.746927   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:50.242871   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:50.740889   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:51.240356   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:51.741511   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:52.241941   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:52.742581   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:53.242041   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:53.740087   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:54.240851   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:54.741006   17285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:55.241250   17285 kapi.go:107] duration metric: took 1m36.50686129s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0103 19:00:55.243224   17285 out.go:177] * Enabled addons: nvidia-device-plugin, ingress-dns, cloud-spanner, storage-provisioner, helm-tiller, yakd, inspektor-gadget, metrics-server, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0103 19:00:55.244801   17285 addons.go:508] enable addons completed in 1m46.912257041s: enabled=[nvidia-device-plugin ingress-dns cloud-spanner storage-provisioner helm-tiller yakd inspektor-gadget metrics-server default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0103 19:00:55.244840   17285 start.go:233] waiting for cluster config update ...
	I0103 19:00:55.244857   17285 start.go:242] writing updated cluster config ...
	I0103 19:00:55.245109   17285 ssh_runner.go:195] Run: rm -f paused
	I0103 19:00:55.295843   17285 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0103 19:00:55.298075   17285 out.go:177] * Done! kubectl is now configured to use "addons-848866" cluster and "default" namespace by default
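	[editor's note] The gcp-auth messages above say that pods can opt out of credential mounting by adding a label with the `gcp-auth-skip-secret` key. As an illustrative sketch only — the label key is quoted from the output above, while the "true" value, the pod name, and the image are assumptions not taken from this report — such a manifest might look like:

	# Hypothetical pod manifest illustrating the gcp-auth opt-out label.
	# Label key comes from the minikube output above; the value "true",
	# the pod name, and the image are placeholders for illustration only.
	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-creds               # hypothetical name
	  labels:
	    gcp-auth-skip-secret: "true"   # assumed value; key quoted from the log
	spec:
	  containers:
	    - name: app
	      image: gcr.io/google-samples/hello-app:1.0   # placeholder image
	
	Per the same output, pods that already exist would not pick this up retroactively; they would need to be recreated, or the addon re-enabled with --refresh (e.g. `minikube addons enable gcp-auth --refresh`).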
	
	
	==> CRI-O <==
	-- Journal begins at Wed 2024-01-03 18:58:23 UTC, ends at Wed 2024-01-03 19:03:43 UTC. --
	Jan 03 19:03:43 addons-848866 crio[712]: time="2024-01-03 19:03:43.411940395Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=4d731ca9-325e-445c-a132-8ed5135d8bdf name=/runtime.v1.RuntimeService/Version
	Jan 03 19:03:43 addons-848866 crio[712]: time="2024-01-03 19:03:43.413464719Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=3c21d2e2-4173-448b-9313-aec098fcab2a name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 19:03:43 addons-848866 crio[712]: time="2024-01-03 19:03:43.415045061Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704308623415024385,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575394,},InodesUsed:&UInt64Value{Value:233,},},},}" file="go-grpc-middleware/chain.go:25" id=3c21d2e2-4173-448b-9313-aec098fcab2a name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 19:03:43 addons-848866 crio[712]: time="2024-01-03 19:03:43.415772779Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4697d511-5a05-4373-8156-a7a2f97e45a9 name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 19:03:43 addons-848866 crio[712]: time="2024-01-03 19:03:43.415825765Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4697d511-5a05-4373-8156-a7a2f97e45a9 name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 19:03:43 addons-848866 crio[712]: time="2024-01-03 19:03:43.416195480Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b83a1ac3609f181a31a6cc591d3065c0248ca4169c2d067c6383a659143df479,PodSandboxId:295c1ad45be0d078d077898267e3b95fe47f0647aec3a5a488b6c49166132e70,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1704308616256209666,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-62spc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6bb489f4-50a5-4948-9083-b18c3026149a,},Annotations:map[string]string{io.kubernetes.container.hash: cdc1fb88,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed2e2e79b67a9d75e511dda8375a2738b455c8e262e3590f303dd169c882a1b7,PodSandboxId:919214340cbdd002b94fb06ff5e5ee36ccc638899bca18e5955dea8a3d2959ad,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,State:CONTAINER_RUNNING,CreatedAt:1704308494194471475,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7ddfbb94ff-87ghr,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: ab26fec3-2021-46e5-a32f-d3e34f48e93a,},An
notations:map[string]string{io.kubernetes.container.hash: c556ebd0,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80d6e4afab7f9bac9af73482d1198fc5d1d3f89a455efb0ca67937d0a7350ac7,PodSandboxId:6abf0c4f70d03012f9dd6b9cb64171f2383355dd2b92519d0594e38a1eedbb9e,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,State:CONTAINER_RUNNING,CreatedAt:1704308475465555153,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: 490ea700-5fcd-4561-baf8-e43b2d4aafd3,},Annotations:map[string]string{io.kubernetes.container.hash: d4853a49,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fbc45be3f933d942336cf11de75e7f98e09833f6cd09f5a6bf2e88e2be067c7,PodSandboxId:fb45ab9a76bed8c11b6c85d01071c01d075885b8556daff81f1c37ef6b1e2b82,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1704308446700522473,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-tkzlz,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 36403fef-30b8-44ea-98a7-d403256ef3ae,},Annotations:map[string]string{io.kubernetes.container.hash: ffa8534,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6e57cc7657f96b0fad9ceaa354db9bc3496a5c2a0c5546e4131ff0decf044a1,PodSandboxId:2ac92e2abd809ced002f71622334c6d07a3616c9ce78bb313998ca957833aee4,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c596
5b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1704308431771549114,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-fgw9d,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 777eda28-9582-4636-9660-a3f6c02493d3,},Annotations:map[string]string{io.kubernetes.container.hash: be6144c2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a959ee4f111041c7ac2ec064a65574af6063635a8f40d72093dd9f50a55611bd,PodSandboxId:b35682dbfde377ba98f68f73ed1e3ba38e0882228c04ad1480a643015666871b,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},},ImageRef:docker.io/rancher/local-path-provisioner@sh
a256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,State:CONTAINER_RUNNING,CreatedAt:1704308431598050711,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-dp6kw,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: f957bc4d-6bb0-4168-8148-5b943f964163,},Annotations:map[string]string{io.kubernetes.container.hash: b0d3fc70,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8a1e07b4e860d97f8ed9976ce31a11d5cc202c68393a1552467f0228ef3a253,PodSandboxId:606927e3e8e8796c6f100d5aab311889cfa4cbf4144069d3aa53ff43b52adc5c,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations
:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1704308421289509987,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-txhjq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d5a1c0ea-fa65-4036-ae3d-9be627b91b6d,},Annotations:map[string]string{io.kubernetes.container.hash: cd863b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a9b0e585a735682eb7c82450daa50fa3f5e7663e970ee66a1618b2199238013,PodSandboxId:000b98598fd0d90e89fafa36ff86b2b4c30d5f9103db251323c11929a891b4c4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a56
2,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704308369791681172,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1deab566-1a42-4d50-a45b-a772cea4cee3,},Annotations:map[string]string{io.kubernetes.container.hash: 6d9b9462,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a7e9b2c3e918e877e7f8cd7bbc8f2a75aa694dad27408d61022a0eeac151067,PodSandboxId:5022031d045868aed77dba4ab7b05de73f4a565879628e5fb77bbb012c4a44c6,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727
bccc0e39f9329310,Annotations:map[string]string{},},ImageRef:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,State:CONTAINER_RUNNING,CreatedAt:1704308369921995345,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-lc6bn,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: bdd000ec-a410-46ec-a4a2-558160f3340f,},Annotations:map[string]string{io.kubernetes.container.hash: 5b9b6fe8,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b741bb9b1640d9d305a4670e515b26640d5df2a035b72155215fc469eeac3d1,PodSandboxId:1f8b2615e18ec965f0af2443c8d8cdb97b8c3acf123db00e96372498482bb36c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},I
mage:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704308364119468482,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mn4pd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ec9c5db-0675-4813-a8b7-808b6525239a,},Annotations:map[string]string{io.kubernetes.container.hash: f9eab8f2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a5683a8bddc3cf24f4294ea736087ea8d93abd9f867d61d2fe1e7787aa9e29b,PodSandboxId:847eb1d0d8ba0fd483f3eca31bcfeb9ff331f2d0348356c54a8a27ae43aa0490,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd1
73874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704308351512188357,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-65cqq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fb26394-bfc6-4d70-8a66-a3643c421b4a,},Annotations:map[string]string{io.kubernetes.container.hash: 2bb6370e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67e832e826d41c6c845f9164efe972fc267a6
e400a4dae895dbe55a591224657,PodSandboxId:840bcc6b787e2007ecfc1544f9a3af3f321ffd4d7093b3916b39d62baaa50027,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704308328199319190,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-848866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4e25cfb30ac26fe9923d00728398f63,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c3042fca25142cde932c9fe841383015649dba3d2d7313a14a7
679424d8fe40,PodSandboxId:64aceb59a1e7fb2f9146eeaab435fa71e4164f59d722a390121b4cefceed1300,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704308327949971299,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-848866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a789364724f4c86fe6ead029c4dd7c7f,},Annotations:map[string]string{io.kubernetes.container.hash: 643373f4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f11a2d4e0986d6f697a40b40255091126f5bdebb03e6cfc5d6e88114a46e44d,PodSandboxId:fe163e82f9635b8b90ea60f4f6ab4
b7b4ef5fa4a46d4be2b49e88e2e4ad75cf2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704308327763332830,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-848866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b039a72c67ee8f421b516dc31e1f88b8,},Annotations:map[string]string{io.kubernetes.container.hash: 110d18ba,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec0ebd78849bf140274d533657f699a8113c57aadbcca35b80fb7884971edcd7,PodSandboxId:88a7f6742f4c972885d0e836625bf282fa09fcacb8b2
8a68b73b084bd42d7460,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704308327782717940,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-848866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e30deac1d048d63d87efdc0ffc146ff2,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4697d511-5a05-4373-8156-a7a2f97e45a9 name=/runtime.v1.RuntimeService/
ListContainers
	Jan 03 19:03:43 addons-848866 crio[712]: time="2024-01-03 19:03:43.450751753Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=89f0ade2-a88f-4e23-9044-8fd04c981044 name=/runtime.v1.RuntimeService/Version
	Jan 03 19:03:43 addons-848866 crio[712]: time="2024-01-03 19:03:43.450812466Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=89f0ade2-a88f-4e23-9044-8fd04c981044 name=/runtime.v1.RuntimeService/Version
	Jan 03 19:03:43 addons-848866 crio[712]: time="2024-01-03 19:03:43.452313679Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=e6a945c6-99ae-44a5-bb17-8c25c0f6e592 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 19:03:43 addons-848866 crio[712]: time="2024-01-03 19:03:43.453510487Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704308623453493042,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575394,},InodesUsed:&UInt64Value{Value:233,},},},}" file="go-grpc-middleware/chain.go:25" id=e6a945c6-99ae-44a5-bb17-8c25c0f6e592 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 19:03:43 addons-848866 crio[712]: time="2024-01-03 19:03:43.454022492Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c7c9b577-f393-4098-a5c8-c00bf2b1fc1f name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 19:03:43 addons-848866 crio[712]: time="2024-01-03 19:03:43.454076743Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c7c9b577-f393-4098-a5c8-c00bf2b1fc1f name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 19:03:43 addons-848866 crio[712]: time="2024-01-03 19:03:43.454399422Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b83a1ac3609f181a31a6cc591d3065c0248ca4169c2d067c6383a659143df479,PodSandboxId:295c1ad45be0d078d077898267e3b95fe47f0647aec3a5a488b6c49166132e70,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1704308616256209666,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-62spc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6bb489f4-50a5-4948-9083-b18c3026149a,},Annotations:map[string]string{io.kubernetes.container.hash: cdc1fb88,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed2e2e79b67a9d75e511dda8375a2738b455c8e262e3590f303dd169c882a1b7,PodSandboxId:919214340cbdd002b94fb06ff5e5ee36ccc638899bca18e5955dea8a3d2959ad,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,State:CONTAINER_RUNNING,CreatedAt:1704308494194471475,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7ddfbb94ff-87ghr,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: ab26fec3-2021-46e5-a32f-d3e34f48e93a,},An
notations:map[string]string{io.kubernetes.container.hash: c556ebd0,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80d6e4afab7f9bac9af73482d1198fc5d1d3f89a455efb0ca67937d0a7350ac7,PodSandboxId:6abf0c4f70d03012f9dd6b9cb64171f2383355dd2b92519d0594e38a1eedbb9e,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,State:CONTAINER_RUNNING,CreatedAt:1704308475465555153,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: 490ea700-5fcd-4561-baf8-e43b2d4aafd3,},Annotations:map[string]string{io.kubernetes.container.hash: d4853a49,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fbc45be3f933d942336cf11de75e7f98e09833f6cd09f5a6bf2e88e2be067c7,PodSandboxId:fb45ab9a76bed8c11b6c85d01071c01d075885b8556daff81f1c37ef6b1e2b82,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1704308446700522473,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-tkzlz,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 36403fef-30b8-44ea-98a7-d403256ef3ae,},Annotations:map[string]string{io.kubernetes.container.hash: ffa8534,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6e57cc7657f96b0fad9ceaa354db9bc3496a5c2a0c5546e4131ff0decf044a1,PodSandboxId:2ac92e2abd809ced002f71622334c6d07a3616c9ce78bb313998ca957833aee4,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c596
5b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1704308431771549114,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-fgw9d,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 777eda28-9582-4636-9660-a3f6c02493d3,},Annotations:map[string]string{io.kubernetes.container.hash: be6144c2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a959ee4f111041c7ac2ec064a65574af6063635a8f40d72093dd9f50a55611bd,PodSandboxId:b35682dbfde377ba98f68f73ed1e3ba38e0882228c04ad1480a643015666871b,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},},ImageRef:docker.io/rancher/local-path-provisioner@sh
a256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,State:CONTAINER_RUNNING,CreatedAt:1704308431598050711,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-dp6kw,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: f957bc4d-6bb0-4168-8148-5b943f964163,},Annotations:map[string]string{io.kubernetes.container.hash: b0d3fc70,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8a1e07b4e860d97f8ed9976ce31a11d5cc202c68393a1552467f0228ef3a253,PodSandboxId:606927e3e8e8796c6f100d5aab311889cfa4cbf4144069d3aa53ff43b52adc5c,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations
:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1704308421289509987,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-txhjq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d5a1c0ea-fa65-4036-ae3d-9be627b91b6d,},Annotations:map[string]string{io.kubernetes.container.hash: cd863b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a9b0e585a735682eb7c82450daa50fa3f5e7663e970ee66a1618b2199238013,PodSandboxId:000b98598fd0d90e89fafa36ff86b2b4c30d5f9103db251323c11929a891b4c4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a56
2,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704308369791681172,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1deab566-1a42-4d50-a45b-a772cea4cee3,},Annotations:map[string]string{io.kubernetes.container.hash: 6d9b9462,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a7e9b2c3e918e877e7f8cd7bbc8f2a75aa694dad27408d61022a0eeac151067,PodSandboxId:5022031d045868aed77dba4ab7b05de73f4a565879628e5fb77bbb012c4a44c6,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727
bccc0e39f9329310,Annotations:map[string]string{},},ImageRef:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,State:CONTAINER_RUNNING,CreatedAt:1704308369921995345,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-lc6bn,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: bdd000ec-a410-46ec-a4a2-558160f3340f,},Annotations:map[string]string{io.kubernetes.container.hash: 5b9b6fe8,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b741bb9b1640d9d305a4670e515b26640d5df2a035b72155215fc469eeac3d1,PodSandboxId:1f8b2615e18ec965f0af2443c8d8cdb97b8c3acf123db00e96372498482bb36c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},I
mage:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704308364119468482,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mn4pd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ec9c5db-0675-4813-a8b7-808b6525239a,},Annotations:map[string]string{io.kubernetes.container.hash: f9eab8f2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a5683a8bddc3cf24f4294ea736087ea8d93abd9f867d61d2fe1e7787aa9e29b,PodSandboxId:847eb1d0d8ba0fd483f3eca31bcfeb9ff331f2d0348356c54a8a27ae43aa0490,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd1
73874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704308351512188357,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-65cqq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fb26394-bfc6-4d70-8a66-a3643c421b4a,},Annotations:map[string]string{io.kubernetes.container.hash: 2bb6370e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67e832e826d41c6c845f9164efe972fc267a6
e400a4dae895dbe55a591224657,PodSandboxId:840bcc6b787e2007ecfc1544f9a3af3f321ffd4d7093b3916b39d62baaa50027,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704308328199319190,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-848866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4e25cfb30ac26fe9923d00728398f63,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c3042fca25142cde932c9fe841383015649dba3d2d7313a14a7
679424d8fe40,PodSandboxId:64aceb59a1e7fb2f9146eeaab435fa71e4164f59d722a390121b4cefceed1300,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704308327949971299,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-848866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a789364724f4c86fe6ead029c4dd7c7f,},Annotations:map[string]string{io.kubernetes.container.hash: 643373f4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f11a2d4e0986d6f697a40b40255091126f5bdebb03e6cfc5d6e88114a46e44d,PodSandboxId:fe163e82f9635b8b90ea60f4f6ab4
b7b4ef5fa4a46d4be2b49e88e2e4ad75cf2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704308327763332830,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-848866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b039a72c67ee8f421b516dc31e1f88b8,},Annotations:map[string]string{io.kubernetes.container.hash: 110d18ba,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec0ebd78849bf140274d533657f699a8113c57aadbcca35b80fb7884971edcd7,PodSandboxId:88a7f6742f4c972885d0e836625bf282fa09fcacb8b2
8a68b73b084bd42d7460,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704308327782717940,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-848866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e30deac1d048d63d87efdc0ffc146ff2,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c7c9b577-f393-4098-a5c8-c00bf2b1fc1f name=/runtime.v1.RuntimeService/
ListContainers
	Jan 03 19:03:43 addons-848866 crio[712]: time="2024-01-03 19:03:43.481538840Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=5e9717dd-8a7a-405d-809c-038c7851f033 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jan 03 19:03:43 addons-848866 crio[712]: time="2024-01-03 19:03:43.481991718Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:295c1ad45be0d078d077898267e3b95fe47f0647aec3a5a488b6c49166132e70,Metadata:&PodSandboxMetadata{Name:hello-world-app-5d77478584-62spc,Uid:6bb489f4-50a5-4948-9083-b18c3026149a,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704308613428162898,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-5d77478584-62spc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6bb489f4-50a5-4948-9083-b18c3026149a,pod-template-hash: 5d77478584,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-03T19:03:33.073267498Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:919214340cbdd002b94fb06ff5e5ee36ccc638899bca18e5955dea8a3d2959ad,Metadata:&PodSandboxMetadata{Name:headlamp-7ddfbb94ff-87ghr,Uid:ab26fec3-2021-46e5-a32f-d3e34f48e93a,Namespace
:headlamp,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704308487299489376,Labels:map[string]string{app.kubernetes.io/instance: headlamp,app.kubernetes.io/name: headlamp,io.kubernetes.container.name: POD,io.kubernetes.pod.name: headlamp-7ddfbb94ff-87ghr,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: ab26fec3-2021-46e5-a32f-d3e34f48e93a,pod-template-hash: 7ddfbb94ff,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-03T19:01:26.969197427Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6abf0c4f70d03012f9dd6b9cb64171f2383355dd2b92519d0594e38a1eedbb9e,Metadata:&PodSandboxMetadata{Name:nginx,Uid:490ea700-5fcd-4561-baf8-e43b2d4aafd3,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704308470357196147,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 490ea700-5fcd-4561-baf8-e43b2d4aafd3,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-
01-03T19:01:10.012556028Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fb45ab9a76bed8c11b6c85d01071c01d075885b8556daff81f1c37ef6b1e2b82,Metadata:&PodSandboxMetadata{Name:gcp-auth-d4c87556c-tkzlz,Uid:36403fef-30b8-44ea-98a7-d403256ef3ae,Namespace:gcp-auth,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704308425034749964,Labels:map[string]string{app: gcp-auth,io.kubernetes.container.name: POD,io.kubernetes.pod.name: gcp-auth-d4c87556c-tkzlz,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 36403fef-30b8-44ea-98a7-d403256ef3ae,kubernetes.io/minikube-addons: gcp-auth,pod-template-hash: d4c87556c,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-03T18:59:20.797128310Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e379358921ba1681552d4dce2c860a4d6808813ac7cf334c823a213f1dee0d15,Metadata:&PodSandboxMetadata{Name:ingress-nginx-controller-69cff4fd79-gx2sf,Uid:b58e2abe-841e-4690-aca4-4769a5333a4f,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NOTRE
ADY,CreatedAt:1704308421567642089,Labels:map[string]string{app.kubernetes.io/component: controller,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-controller-69cff4fd79-gx2sf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b58e2abe-841e-4690-aca4-4769a5333a4f,pod-template-hash: 69cff4fd79,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-03T18:59:17.628849074Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2ac92e2abd809ced002f71622334c6d07a3616c9ce78bb313998ca957833aee4,Metadata:&PodSandboxMetadata{Name:ingress-nginx-admission-patch-fgw9d,Uid:777eda28-9582-4636-9660-a3f6c02493d3,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1704308358081309003,Labels:map[string]string{app.kubernetes.io/component: admission-webhook,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,batch.kube
rnetes.io/controller-uid: 627fefb3-1d7d-40a6-8806-a0747b7477b3,batch.kubernetes.io/job-name: ingress-nginx-admission-patch,controller-uid: 627fefb3-1d7d-40a6-8806-a0747b7477b3,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-admission-patch-fgw9d,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 777eda28-9582-4636-9660-a3f6c02493d3,job-name: ingress-nginx-admission-patch,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-03T18:59:17.712672584Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:606927e3e8e8796c6f100d5aab311889cfa4cbf4144069d3aa53ff43b52adc5c,Metadata:&PodSandboxMetadata{Name:ingress-nginx-admission-create-txhjq,Uid:d5a1c0ea-fa65-4036-ae3d-9be627b91b6d,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1704308358014460457,Labels:map[string]string{app.kubernetes.io/component: admission-webhook,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,batch.kubernetes.io/controller-uid:
b4e50970-439d-4fcc-a960-903fa00a8a94,batch.kubernetes.io/job-name: ingress-nginx-admission-create,controller-uid: b4e50970-439d-4fcc-a960-903fa00a8a94,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-admission-create-txhjq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d5a1c0ea-fa65-4036-ae3d-9be627b91b6d,job-name: ingress-nginx-admission-create,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-03T18:59:17.673958866Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5022031d045868aed77dba4ab7b05de73f4a565879628e5fb77bbb012c4a44c6,Metadata:&PodSandboxMetadata{Name:yakd-dashboard-9947fc6bf-lc6bn,Uid:bdd000ec-a410-46ec-a4a2-558160f3340f,Namespace:yakd-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704308357898244609,Labels:map[string]string{app.kubernetes.io/instance: yakd-dashboard,app.kubernetes.io/name: yakd-dashboard,gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-
lc6bn,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: bdd000ec-a410-46ec-a4a2-558160f3340f,pod-template-hash: 9947fc6bf,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-03T18:59:16.662657319Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b35682dbfde377ba98f68f73ed1e3ba38e0882228c04ad1480a643015666871b,Metadata:&PodSandboxMetadata{Name:local-path-provisioner-78b46b4d5c-dp6kw,Uid:f957bc4d-6bb0-4168-8148-5b943f964163,Namespace:local-path-storage,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704308356999705995,Labels:map[string]string{app: local-path-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-dp6kw,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: f957bc4d-6bb0-4168-8148-5b943f964163,pod-template-hash: 78b46b4d5c,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-03T18:59:16.056828038Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:000b98598
fd0d90e89fafa36ff86b2b4c30d5f9103db251323c11929a891b4c4,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:1deab566-1a42-4d50-a45b-a772cea4cee3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704308356589576082,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1deab566-1a42-4d50-a45b-a772cea4cee3,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storag
e-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-01-03T18:59:16.204003037Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1aa001e902c768990b80695b4093c5867ca0b0dddaad9d3f5c62f93be17e9137,Metadata:&PodSandboxMetadata{Name:kube-ingress-dns-minikube,Uid:7fb40f5c-ea06-451a-bf9d-4ccd66d89336,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1704308355513039821,Labels:map[string]string{app: minikube-ingress-dns,app.kubernetes.io/part-of: kube-system,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fb40f5c-ea06-451a-bf9d-4ccd66d89336,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metada
ta\":{\"annotations\":{},\"labels\":{\"app\":\"minikube-ingress-dns\",\"app.kubernetes.io/part-of\":\"kube-system\"},\"name\":\"kube-ingress-dns-minikube\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"DNS_PORT\",\"value\":\"53\"},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}}],\"image\":\"gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"minikube-ingress-dns\",\"ports\":[{\"containerPort\":53,\"protocol\":\"UDP\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"minikube-ingress-dns\"}}\n,kubernetes.io/config.seen: 2024-01-03T18:59:14.877025469Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:847eb1d0d8ba0fd483f3eca31bcfeb9ff331f2d0348356c54a8a27ae43aa0490,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-65cqq,Uid:2fb26394-bfc6-4d70-8a66-a3643c421b4a,Namespace:kube-system,Attempt:0,},State:SANDBOX_R
EADY,CreatedAt:1704308348527360069,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-65cqq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fb26394-bfc6-4d70-8a66-a3643c421b4a,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-03T18:59:08.188732090Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1f8b2615e18ec965f0af2443c8d8cdb97b8c3acf123db00e96372498482bb36c,Metadata:&PodSandboxMetadata{Name:kube-proxy-mn4pd,Uid:4ec9c5db-0675-4813-a8b7-808b6525239a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704308348457254385,Labels:map[string]string{controller-revision-hash: 8486c7d9cd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-mn4pd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ec9c5db-0675-4813-a8b7-808b6525239a,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/co
nfig.seen: 2024-01-03T18:59:08.035653683Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fe163e82f9635b8b90ea60f4f6ab4b7b4ef5fa4a46d4be2b49e88e2e4ad75cf2,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-848866,Uid:b039a72c67ee8f421b516dc31e1f88b8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704308327241539576,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-addons-848866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b039a72c67ee8f421b516dc31e1f88b8,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.253:8443,kubernetes.io/config.hash: b039a72c67ee8f421b516dc31e1f88b8,kubernetes.io/config.seen: 2024-01-03T18:58:46.694810990Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:64aceb59a1e7fb2f9146eeaab435fa71e4164f59d722a390121b4cefceed1300,Metadata:&PodSandboxMetadata{Name:etcd-ad
dons-848866,Uid:a789364724f4c86fe6ead029c4dd7c7f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704308327231813056,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-addons-848866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a789364724f4c86fe6ead029c4dd7c7f,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.253:2379,kubernetes.io/config.hash: a789364724f4c86fe6ead029c4dd7c7f,kubernetes.io/config.seen: 2024-01-03T18:58:46.694817277Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:88a7f6742f4c972885d0e836625bf282fa09fcacb8b28a68b73b084bd42d7460,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-addons-848866,Uid:e30deac1d048d63d87efdc0ffc146ff2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704308327222410151,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kube
rnetes.pod.name: kube-controller-manager-addons-848866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e30deac1d048d63d87efdc0ffc146ff2,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: e30deac1d048d63d87efdc0ffc146ff2,kubernetes.io/config.seen: 2024-01-03T18:58:46.694815355Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:840bcc6b787e2007ecfc1544f9a3af3f321ffd4d7093b3916b39d62baaa50027,Metadata:&PodSandboxMetadata{Name:kube-scheduler-addons-848866,Uid:b4e25cfb30ac26fe9923d00728398f63,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704308327209442285,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-addons-848866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4e25cfb30ac26fe9923d00728398f63,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b4e25cfb30ac26fe9923d00728398f63,kubernetes.io/config.seen: 2024-01-03
T18:58:46.694816222Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=5e9717dd-8a7a-405d-809c-038c7851f033 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jan 03 19:03:43 addons-848866 crio[712]: time="2024-01-03 19:03:43.483015377Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=72de3b04-40a3-487c-8606-3ca4f15deee0 name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 19:03:43 addons-848866 crio[712]: time="2024-01-03 19:03:43.483070182Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=72de3b04-40a3-487c-8606-3ca4f15deee0 name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 19:03:43 addons-848866 crio[712]: time="2024-01-03 19:03:43.483480603Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b83a1ac3609f181a31a6cc591d3065c0248ca4169c2d067c6383a659143df479,PodSandboxId:295c1ad45be0d078d077898267e3b95fe47f0647aec3a5a488b6c49166132e70,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1704308616256209666,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-62spc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6bb489f4-50a5-4948-9083-b18c3026149a,},Annotations:map[string]string{io.kubernetes.container.hash: cdc1fb88,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed2e2e79b67a9d75e511dda8375a2738b455c8e262e3590f303dd169c882a1b7,PodSandboxId:919214340cbdd002b94fb06ff5e5ee36ccc638899bca18e5955dea8a3d2959ad,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,State:CONTAINER_RUNNING,CreatedAt:1704308494194471475,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7ddfbb94ff-87ghr,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: ab26fec3-2021-46e5-a32f-d3e34f48e93a,},An
notations:map[string]string{io.kubernetes.container.hash: c556ebd0,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80d6e4afab7f9bac9af73482d1198fc5d1d3f89a455efb0ca67937d0a7350ac7,PodSandboxId:6abf0c4f70d03012f9dd6b9cb64171f2383355dd2b92519d0594e38a1eedbb9e,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,State:CONTAINER_RUNNING,CreatedAt:1704308475465555153,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: 490ea700-5fcd-4561-baf8-e43b2d4aafd3,},Annotations:map[string]string{io.kubernetes.container.hash: d4853a49,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fbc45be3f933d942336cf11de75e7f98e09833f6cd09f5a6bf2e88e2be067c7,PodSandboxId:fb45ab9a76bed8c11b6c85d01071c01d075885b8556daff81f1c37ef6b1e2b82,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1704308446700522473,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-tkzlz,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 36403fef-30b8-44ea-98a7-d403256ef3ae,},Annotations:map[string]string{io.kubernetes.container.hash: ffa8534,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6e57cc7657f96b0fad9ceaa354db9bc3496a5c2a0c5546e4131ff0decf044a1,PodSandboxId:2ac92e2abd809ced002f71622334c6d07a3616c9ce78bb313998ca957833aee4,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c596
5b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1704308431771549114,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-fgw9d,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 777eda28-9582-4636-9660-a3f6c02493d3,},Annotations:map[string]string{io.kubernetes.container.hash: be6144c2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a959ee4f111041c7ac2ec064a65574af6063635a8f40d72093dd9f50a55611bd,PodSandboxId:b35682dbfde377ba98f68f73ed1e3ba38e0882228c04ad1480a643015666871b,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},},ImageRef:docker.io/rancher/local-path-provisioner@sh
a256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,State:CONTAINER_RUNNING,CreatedAt:1704308431598050711,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-dp6kw,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: f957bc4d-6bb0-4168-8148-5b943f964163,},Annotations:map[string]string{io.kubernetes.container.hash: b0d3fc70,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8a1e07b4e860d97f8ed9976ce31a11d5cc202c68393a1552467f0228ef3a253,PodSandboxId:606927e3e8e8796c6f100d5aab311889cfa4cbf4144069d3aa53ff43b52adc5c,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations
:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1704308421289509987,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-txhjq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d5a1c0ea-fa65-4036-ae3d-9be627b91b6d,},Annotations:map[string]string{io.kubernetes.container.hash: cd863b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a9b0e585a735682eb7c82450daa50fa3f5e7663e970ee66a1618b2199238013,PodSandboxId:000b98598fd0d90e89fafa36ff86b2b4c30d5f9103db251323c11929a891b4c4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a56
2,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704308369791681172,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1deab566-1a42-4d50-a45b-a772cea4cee3,},Annotations:map[string]string{io.kubernetes.container.hash: 6d9b9462,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a7e9b2c3e918e877e7f8cd7bbc8f2a75aa694dad27408d61022a0eeac151067,PodSandboxId:5022031d045868aed77dba4ab7b05de73f4a565879628e5fb77bbb012c4a44c6,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727
bccc0e39f9329310,Annotations:map[string]string{},},ImageRef:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,State:CONTAINER_RUNNING,CreatedAt:1704308369921995345,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-lc6bn,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: bdd000ec-a410-46ec-a4a2-558160f3340f,},Annotations:map[string]string{io.kubernetes.container.hash: 5b9b6fe8,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b741bb9b1640d9d305a4670e515b26640d5df2a035b72155215fc469eeac3d1,PodSandboxId:1f8b2615e18ec965f0af2443c8d8cdb97b8c3acf123db00e96372498482bb36c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},I
mage:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704308364119468482,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mn4pd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ec9c5db-0675-4813-a8b7-808b6525239a,},Annotations:map[string]string{io.kubernetes.container.hash: f9eab8f2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a5683a8bddc3cf24f4294ea736087ea8d93abd9f867d61d2fe1e7787aa9e29b,PodSandboxId:847eb1d0d8ba0fd483f3eca31bcfeb9ff331f2d0348356c54a8a27ae43aa0490,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd1
73874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704308351512188357,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-65cqq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fb26394-bfc6-4d70-8a66-a3643c421b4a,},Annotations:map[string]string{io.kubernetes.container.hash: 2bb6370e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67e832e826d41c6c845f9164efe972fc267a6
e400a4dae895dbe55a591224657,PodSandboxId:840bcc6b787e2007ecfc1544f9a3af3f321ffd4d7093b3916b39d62baaa50027,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704308328199319190,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-848866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4e25cfb30ac26fe9923d00728398f63,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c3042fca25142cde932c9fe841383015649dba3d2d7313a14a7
679424d8fe40,PodSandboxId:64aceb59a1e7fb2f9146eeaab435fa71e4164f59d722a390121b4cefceed1300,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704308327949971299,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-848866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a789364724f4c86fe6ead029c4dd7c7f,},Annotations:map[string]string{io.kubernetes.container.hash: 643373f4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f11a2d4e0986d6f697a40b40255091126f5bdebb03e6cfc5d6e88114a46e44d,PodSandboxId:fe163e82f9635b8b90ea60f4f6ab4
b7b4ef5fa4a46d4be2b49e88e2e4ad75cf2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704308327763332830,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-848866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b039a72c67ee8f421b516dc31e1f88b8,},Annotations:map[string]string{io.kubernetes.container.hash: 110d18ba,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec0ebd78849bf140274d533657f699a8113c57aadbcca35b80fb7884971edcd7,PodSandboxId:88a7f6742f4c972885d0e836625bf282fa09fcacb8b2
8a68b73b084bd42d7460,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704308327782717940,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-848866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e30deac1d048d63d87efdc0ffc146ff2,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=72de3b04-40a3-487c-8606-3ca4f15deee0 name=/runtime.v1.RuntimeService/
ListContainers
	Jan 03 19:03:43 addons-848866 crio[712]: time="2024-01-03 19:03:43.496353583Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=93a0dfc4-e6d9-46ef-acee-e87df42b7144 name=/runtime.v1.RuntimeService/Version
	Jan 03 19:03:43 addons-848866 crio[712]: time="2024-01-03 19:03:43.496412112Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=93a0dfc4-e6d9-46ef-acee-e87df42b7144 name=/runtime.v1.RuntimeService/Version
	Jan 03 19:03:43 addons-848866 crio[712]: time="2024-01-03 19:03:43.497422255Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=47de1718-3b6b-4e34-a7dd-ea426a5ca343 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 19:03:43 addons-848866 crio[712]: time="2024-01-03 19:03:43.498612885Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704308623498596629,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575394,},InodesUsed:&UInt64Value{Value:233,},},},}" file="go-grpc-middleware/chain.go:25" id=47de1718-3b6b-4e34-a7dd-ea426a5ca343 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 19:03:43 addons-848866 crio[712]: time="2024-01-03 19:03:43.499394615Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=68dbb552-e623-4a5a-a63e-365176458e95 name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 19:03:43 addons-848866 crio[712]: time="2024-01-03 19:03:43.499516984Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=68dbb552-e623-4a5a-a63e-365176458e95 name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 19:03:43 addons-848866 crio[712]: time="2024-01-03 19:03:43.499840697Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b83a1ac3609f181a31a6cc591d3065c0248ca4169c2d067c6383a659143df479,PodSandboxId:295c1ad45be0d078d077898267e3b95fe47f0647aec3a5a488b6c49166132e70,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1704308616256209666,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-62spc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6bb489f4-50a5-4948-9083-b18c3026149a,},Annotations:map[string]string{io.kubernetes.container.hash: cdc1fb88,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed2e2e79b67a9d75e511dda8375a2738b455c8e262e3590f303dd169c882a1b7,PodSandboxId:919214340cbdd002b94fb06ff5e5ee36ccc638899bca18e5955dea8a3d2959ad,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,State:CONTAINER_RUNNING,CreatedAt:1704308494194471475,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7ddfbb94ff-87ghr,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: ab26fec3-2021-46e5-a32f-d3e34f48e93a,},An
notations:map[string]string{io.kubernetes.container.hash: c556ebd0,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80d6e4afab7f9bac9af73482d1198fc5d1d3f89a455efb0ca67937d0a7350ac7,PodSandboxId:6abf0c4f70d03012f9dd6b9cb64171f2383355dd2b92519d0594e38a1eedbb9e,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,State:CONTAINER_RUNNING,CreatedAt:1704308475465555153,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: 490ea700-5fcd-4561-baf8-e43b2d4aafd3,},Annotations:map[string]string{io.kubernetes.container.hash: d4853a49,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fbc45be3f933d942336cf11de75e7f98e09833f6cd09f5a6bf2e88e2be067c7,PodSandboxId:fb45ab9a76bed8c11b6c85d01071c01d075885b8556daff81f1c37ef6b1e2b82,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1704308446700522473,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-tkzlz,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 36403fef-30b8-44ea-98a7-d403256ef3ae,},Annotations:map[string]string{io.kubernetes.container.hash: ffa8534,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6e57cc7657f96b0fad9ceaa354db9bc3496a5c2a0c5546e4131ff0decf044a1,PodSandboxId:2ac92e2abd809ced002f71622334c6d07a3616c9ce78bb313998ca957833aee4,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c596
5b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1704308431771549114,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-fgw9d,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 777eda28-9582-4636-9660-a3f6c02493d3,},Annotations:map[string]string{io.kubernetes.container.hash: be6144c2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a959ee4f111041c7ac2ec064a65574af6063635a8f40d72093dd9f50a55611bd,PodSandboxId:b35682dbfde377ba98f68f73ed1e3ba38e0882228c04ad1480a643015666871b,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},},ImageRef:docker.io/rancher/local-path-provisioner@sh
a256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,State:CONTAINER_RUNNING,CreatedAt:1704308431598050711,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-dp6kw,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: f957bc4d-6bb0-4168-8148-5b943f964163,},Annotations:map[string]string{io.kubernetes.container.hash: b0d3fc70,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8a1e07b4e860d97f8ed9976ce31a11d5cc202c68393a1552467f0228ef3a253,PodSandboxId:606927e3e8e8796c6f100d5aab311889cfa4cbf4144069d3aa53ff43b52adc5c,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations
:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1704308421289509987,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-txhjq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d5a1c0ea-fa65-4036-ae3d-9be627b91b6d,},Annotations:map[string]string{io.kubernetes.container.hash: cd863b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a9b0e585a735682eb7c82450daa50fa3f5e7663e970ee66a1618b2199238013,PodSandboxId:000b98598fd0d90e89fafa36ff86b2b4c30d5f9103db251323c11929a891b4c4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a56
2,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704308369791681172,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1deab566-1a42-4d50-a45b-a772cea4cee3,},Annotations:map[string]string{io.kubernetes.container.hash: 6d9b9462,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a7e9b2c3e918e877e7f8cd7bbc8f2a75aa694dad27408d61022a0eeac151067,PodSandboxId:5022031d045868aed77dba4ab7b05de73f4a565879628e5fb77bbb012c4a44c6,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727
bccc0e39f9329310,Annotations:map[string]string{},},ImageRef:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,State:CONTAINER_RUNNING,CreatedAt:1704308369921995345,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-lc6bn,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: bdd000ec-a410-46ec-a4a2-558160f3340f,},Annotations:map[string]string{io.kubernetes.container.hash: 5b9b6fe8,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b741bb9b1640d9d305a4670e515b26640d5df2a035b72155215fc469eeac3d1,PodSandboxId:1f8b2615e18ec965f0af2443c8d8cdb97b8c3acf123db00e96372498482bb36c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},I
mage:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704308364119468482,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mn4pd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ec9c5db-0675-4813-a8b7-808b6525239a,},Annotations:map[string]string{io.kubernetes.container.hash: f9eab8f2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a5683a8bddc3cf24f4294ea736087ea8d93abd9f867d61d2fe1e7787aa9e29b,PodSandboxId:847eb1d0d8ba0fd483f3eca31bcfeb9ff331f2d0348356c54a8a27ae43aa0490,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd1
73874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704308351512188357,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-65cqq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fb26394-bfc6-4d70-8a66-a3643c421b4a,},Annotations:map[string]string{io.kubernetes.container.hash: 2bb6370e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67e832e826d41c6c845f9164efe972fc267a6
e400a4dae895dbe55a591224657,PodSandboxId:840bcc6b787e2007ecfc1544f9a3af3f321ffd4d7093b3916b39d62baaa50027,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704308328199319190,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-848866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4e25cfb30ac26fe9923d00728398f63,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c3042fca25142cde932c9fe841383015649dba3d2d7313a14a7
679424d8fe40,PodSandboxId:64aceb59a1e7fb2f9146eeaab435fa71e4164f59d722a390121b4cefceed1300,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704308327949971299,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-848866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a789364724f4c86fe6ead029c4dd7c7f,},Annotations:map[string]string{io.kubernetes.container.hash: 643373f4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f11a2d4e0986d6f697a40b40255091126f5bdebb03e6cfc5d6e88114a46e44d,PodSandboxId:fe163e82f9635b8b90ea60f4f6ab4
b7b4ef5fa4a46d4be2b49e88e2e4ad75cf2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704308327763332830,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-848866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b039a72c67ee8f421b516dc31e1f88b8,},Annotations:map[string]string{io.kubernetes.container.hash: 110d18ba,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec0ebd78849bf140274d533657f699a8113c57aadbcca35b80fb7884971edcd7,PodSandboxId:88a7f6742f4c972885d0e836625bf282fa09fcacb8b2
8a68b73b084bd42d7460,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704308327782717940,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-848866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e30deac1d048d63d87efdc0ffc146ff2,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=68dbb552-e623-4a5a-a63e-365176458e95 name=/runtime.v1.RuntimeService/
ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b83a1ac3609f1       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                      7 seconds ago       Running             hello-world-app           0                   295c1ad45be0d       hello-world-app-5d77478584-62spc
	ed2e2e79b67a9       ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67                        2 minutes ago       Running             headlamp                  0                   919214340cbdd       headlamp-7ddfbb94ff-87ghr
	80d6e4afab7f9       docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686                              2 minutes ago       Running             nginx                     0                   6abf0c4f70d03       nginx
	8fbc45be3f933       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06                 2 minutes ago       Running             gcp-auth                  0                   fb45ab9a76bed       gcp-auth-d4c87556c-tkzlz
	a6e57cc7657f9       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385   3 minutes ago       Exited              patch                     0                   2ac92e2abd809       ingress-nginx-admission-patch-fgw9d
	a959ee4f11104       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             3 minutes ago       Running             local-path-provisioner    0                   b35682dbfde37       local-path-provisioner-78b46b4d5c-dp6kw
	e8a1e07b4e860       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385   3 minutes ago       Exited              create                    0                   606927e3e8e87       ingress-nginx-admission-create-txhjq
	3a7e9b2c3e918       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                              4 minutes ago       Running             yakd                      0                   5022031d04586       yakd-dashboard-9947fc6bf-lc6bn
	1a9b0e585a735       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   000b98598fd0d       storage-provisioner
	6b741bb9b1640       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                                             4 minutes ago       Running             kube-proxy                0                   1f8b2615e18ec       kube-proxy-mn4pd
	2a5683a8bddc3       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                             4 minutes ago       Running             coredns                   0                   847eb1d0d8ba0       coredns-5dd5756b68-65cqq
	67e832e826d41       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                                             4 minutes ago       Running             kube-scheduler            0                   840bcc6b787e2       kube-scheduler-addons-848866
	5c3042fca2514       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                             4 minutes ago       Running             etcd                      0                   64aceb59a1e7f       etcd-addons-848866
	ec0ebd78849bf       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                                             4 minutes ago       Running             kube-controller-manager   0                   88a7f6742f4c9       kube-controller-manager-addons-848866
	8f11a2d4e0986       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                                             4 minutes ago       Running             kube-apiserver            0                   fe163e82f9635       kube-apiserver-addons-848866
	
	
	==> coredns [2a5683a8bddc3cf24f4294ea736087ea8d93abd9f867d61d2fe1e7787aa9e29b] <==
	[INFO] 10.244.0.8:56808 - 12288 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000104481s
	[INFO] 10.244.0.8:51819 - 26305 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000138559s
	[INFO] 10.244.0.8:51819 - 40652 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000148851s
	[INFO] 10.244.0.8:53217 - 54932 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000108686s
	[INFO] 10.244.0.8:53217 - 5781 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000134215s
	[INFO] 10.244.0.8:38818 - 51080 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000125935s
	[INFO] 10.244.0.8:38818 - 30601 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000095472s
	[INFO] 10.244.0.8:38896 - 29118 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000118561s
	[INFO] 10.244.0.8:38896 - 31667 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000051002s
	[INFO] 10.244.0.8:40287 - 33055 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000076427s
	[INFO] 10.244.0.8:40287 - 45597 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000038292s
	[INFO] 10.244.0.8:42094 - 56092 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000056417s
	[INFO] 10.244.0.8:42094 - 62994 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000030994s
	[INFO] 10.244.0.8:33563 - 23205 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000072004s
	[INFO] 10.244.0.8:33563 - 37796 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000028581s
	[INFO] 10.244.0.22:36194 - 13966 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000291166s
	[INFO] 10.244.0.22:35542 - 32858 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000643481s
	[INFO] 10.244.0.22:46531 - 33707 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000215986s
	[INFO] 10.244.0.22:54113 - 29282 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000117144s
	[INFO] 10.244.0.22:59260 - 13860 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00025121s
	[INFO] 10.244.0.22:33767 - 821 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000081463s
	[INFO] 10.244.0.22:44895 - 31775 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000850051s
	[INFO] 10.244.0.22:52589 - 59998 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.000592312s
	[INFO] 10.244.0.25:60668 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000667164s
	[INFO] 10.244.0.25:53321 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000170168s
	
	
	==> describe nodes <==
	Name:               addons-848866
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-848866
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b6a81cbc05f28310ff11df4170e79e2b8bf477a
	                    minikube.k8s.io/name=addons-848866
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_03T18_58_56_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-848866
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 03 Jan 2024 18:58:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-848866
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 03 Jan 2024 19:03:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 03 Jan 2024 19:02:30 +0000   Wed, 03 Jan 2024 18:58:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 03 Jan 2024 19:02:30 +0000   Wed, 03 Jan 2024 18:58:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 03 Jan 2024 19:02:30 +0000   Wed, 03 Jan 2024 18:58:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 03 Jan 2024 19:02:30 +0000   Wed, 03 Jan 2024 18:58:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.253
	  Hostname:    addons-848866
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914496Ki
	  pods:               110
	System Info:
	  Machine ID:                 5e5eea721f9646da822d32798a345595
	  System UUID:                5e5eea72-1f96-46da-822d-32798a345595
	  Boot ID:                    2ab268e5-f4b7-4d70-990b-b6d5e6cbc347
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-62spc           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m34s
	  gcp-auth                    gcp-auth-d4c87556c-tkzlz                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m23s
	  headlamp                    headlamp-7ddfbb94ff-87ghr                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m17s
	  kube-system                 coredns-5dd5756b68-65cqq                   100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m35s
	  kube-system                 etcd-addons-848866                         100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m50s
	  kube-system                 kube-apiserver-addons-848866               250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m49s
	  kube-system                 kube-controller-manager-addons-848866      200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m47s
	  kube-system                 kube-proxy-mn4pd                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 kube-scheduler-addons-848866               100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m49s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  local-path-storage          local-path-provisioner-78b46b4d5c-dp6kw    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m28s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-lc6bn             0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     4m27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             298Mi (7%)  426Mi (11%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m15s                  kube-proxy       
	  Normal  Starting                 4m57s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m57s (x8 over 4m57s)  kubelet          Node addons-848866 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m57s (x8 over 4m57s)  kubelet          Node addons-848866 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m57s (x7 over 4m57s)  kubelet          Node addons-848866 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m57s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m48s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m48s                  kubelet          Node addons-848866 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m48s                  kubelet          Node addons-848866 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m48s                  kubelet          Node addons-848866 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m47s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                4m47s                  kubelet          Node addons-848866 status is now: NodeReady
	  Normal  RegisteredNode           4m36s                  node-controller  Node addons-848866 event: Registered Node addons-848866 in Controller
	
	
	==> dmesg <==
	[  +4.996379] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.898630] systemd-fstab-generator[636]: Ignoring "noauto" for root device
	[  +0.107295] systemd-fstab-generator[647]: Ignoring "noauto" for root device
	[  +0.136419] systemd-fstab-generator[660]: Ignoring "noauto" for root device
	[  +0.112080] systemd-fstab-generator[671]: Ignoring "noauto" for root device
	[  +0.202565] systemd-fstab-generator[695]: Ignoring "noauto" for root device
	[  +9.224233] systemd-fstab-generator[903]: Ignoring "noauto" for root device
	[  +9.256316] systemd-fstab-generator[1234]: Ignoring "noauto" for root device
	[Jan 3 18:59] kauditd_printk_skb: 1 callbacks suppressed
	[  +5.105212] kauditd_printk_skb: 57 callbacks suppressed
	[  +9.822131] kauditd_printk_skb: 15 callbacks suppressed
	[ +13.003140] kauditd_printk_skb: 16 callbacks suppressed
	[ +12.091685] kauditd_printk_skb: 18 callbacks suppressed
	[Jan 3 19:00] kauditd_printk_skb: 9 callbacks suppressed
	[ +14.940189] kauditd_printk_skb: 22 callbacks suppressed
	[Jan 3 19:01] kauditd_printk_skb: 23 callbacks suppressed
	[  +6.601762] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.556312] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.477328] kauditd_printk_skb: 5 callbacks suppressed
	[ +12.971037] kauditd_printk_skb: 15 callbacks suppressed
	[ +21.180856] kauditd_printk_skb: 2 callbacks suppressed
	[Jan 3 19:02] kauditd_printk_skb: 12 callbacks suppressed
	[Jan 3 19:03] kauditd_printk_skb: 5 callbacks suppressed
	
	
	==> etcd [5c3042fca25142cde932c9fe841383015649dba3d2d7313a14a7679424d8fe40] <==
	{"level":"warn","ts":"2024-01-03T19:00:10.07126Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"260.078047ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:13488"}
	{"level":"info","ts":"2024-01-03T19:00:10.071303Z","caller":"traceutil/trace.go:171","msg":"trace[2029116386] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1014; }","duration":"260.122925ms","start":"2024-01-03T19:00:09.811174Z","end":"2024-01-03T19:00:10.071297Z","steps":["trace[2029116386] 'agreement among raft nodes before linearized reading'  (duration: 260.033102ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-03T19:00:10.070806Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"166.192581ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:10575"}
	{"level":"info","ts":"2024-01-03T19:00:10.071409Z","caller":"traceutil/trace.go:171","msg":"trace[566467202] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1014; }","duration":"166.795188ms","start":"2024-01-03T19:00:09.904605Z","end":"2024-01-03T19:00:10.0714Z","steps":["trace[566467202] 'agreement among raft nodes before linearized reading'  (duration: 166.151598ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-03T19:00:35.399564Z","caller":"traceutil/trace.go:171","msg":"trace[425998814] transaction","detail":"{read_only:false; response_revision:1128; number_of_response:1; }","duration":"229.511599ms","start":"2024-01-03T19:00:35.170019Z","end":"2024-01-03T19:00:35.39953Z","steps":["trace[425998814] 'process raft request'  (duration: 229.34528ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-03T19:00:35.42109Z","caller":"traceutil/trace.go:171","msg":"trace[168362784] linearizableReadLoop","detail":"{readStateIndex:1169; appliedIndex:1168; }","duration":"187.026369ms","start":"2024-01-03T19:00:35.234048Z","end":"2024-01-03T19:00:35.421075Z","steps":["trace[168362784] 'read index received'  (duration: 167.009ms)","trace[168362784] 'applied index is now lower than readState.Index'  (duration: 20.01634ms)"],"step_count":2}
	{"level":"info","ts":"2024-01-03T19:00:35.421192Z","caller":"traceutil/trace.go:171","msg":"trace[1363197161] transaction","detail":"{read_only:false; response_revision:1129; number_of_response:1; }","duration":"236.52602ms","start":"2024-01-03T19:00:35.184658Z","end":"2024-01-03T19:00:35.421185Z","steps":["trace[1363197161] 'process raft request'  (duration: 227.873866ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-03T19:00:35.421461Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"187.392653ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:82382"}
	{"level":"info","ts":"2024-01-03T19:00:35.421515Z","caller":"traceutil/trace.go:171","msg":"trace[513221860] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1130; }","duration":"187.48607ms","start":"2024-01-03T19:00:35.234022Z","end":"2024-01-03T19:00:35.421508Z","steps":["trace[513221860] 'agreement among raft nodes before linearized reading'  (duration: 187.270445ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-03T19:00:35.421485Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"133.877377ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-848866\" ","response":"range_response_count:1 size:6931"}
	{"level":"info","ts":"2024-01-03T19:00:35.421595Z","caller":"traceutil/trace.go:171","msg":"trace[200894752] range","detail":"{range_begin:/registry/minions/addons-848866; range_end:; response_count:1; response_revision:1130; }","duration":"134.055451ms","start":"2024-01-03T19:00:35.28753Z","end":"2024-01-03T19:00:35.421586Z","steps":["trace[200894752] 'agreement among raft nodes before linearized reading'  (duration: 133.839852ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-03T19:00:35.421885Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"111.002088ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:13864"}
	{"level":"info","ts":"2024-01-03T19:00:35.422046Z","caller":"traceutil/trace.go:171","msg":"trace[1108668842] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1130; }","duration":"111.166283ms","start":"2024-01-03T19:00:35.310873Z","end":"2024-01-03T19:00:35.422039Z","steps":["trace[1108668842] 'agreement among raft nodes before linearized reading'  (duration: 110.974693ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-03T19:00:39.75046Z","caller":"traceutil/trace.go:171","msg":"trace[1927839268] transaction","detail":"{read_only:false; response_revision:1153; number_of_response:1; }","duration":"221.138566ms","start":"2024-01-03T19:00:39.529302Z","end":"2024-01-03T19:00:39.75044Z","steps":["trace[1927839268] 'process raft request'  (duration: 221.022776ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-03T19:00:39.752323Z","caller":"traceutil/trace.go:171","msg":"trace[638661743] linearizableReadLoop","detail":"{readStateIndex:1194; appliedIndex:1193; }","duration":"100.420695ms","start":"2024-01-03T19:00:39.651892Z","end":"2024-01-03T19:00:39.752312Z","steps":["trace[638661743] 'read index received'  (duration: 98.594254ms)","trace[638661743] 'applied index is now lower than readState.Index'  (duration: 1.823015ms)"],"step_count":2}
	{"level":"warn","ts":"2024-01-03T19:00:39.752542Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.663062ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourcequotas/\" range_end:\"/registry/resourcequotas0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-03T19:00:39.752615Z","caller":"traceutil/trace.go:171","msg":"trace[1061616210] range","detail":"{range_begin:/registry/resourcequotas/; range_end:/registry/resourcequotas0; response_count:0; response_revision:1153; }","duration":"100.733614ms","start":"2024-01-03T19:00:39.651846Z","end":"2024-01-03T19:00:39.75258Z","steps":["trace[1061616210] 'agreement among raft nodes before linearized reading'  (duration: 100.641989ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-03T19:00:42.423131Z","caller":"traceutil/trace.go:171","msg":"trace[1087552943] linearizableReadLoop","detail":"{readStateIndex:1197; appliedIndex:1196; }","duration":"107.73005ms","start":"2024-01-03T19:00:42.315387Z","end":"2024-01-03T19:00:42.423117Z","steps":["trace[1087552943] 'read index received'  (duration: 107.4519ms)","trace[1087552943] 'applied index is now lower than readState.Index'  (duration: 277.584µs)"],"step_count":2}
	{"level":"info","ts":"2024-01-03T19:00:42.423281Z","caller":"traceutil/trace.go:171","msg":"trace[451620558] transaction","detail":"{read_only:false; response_revision:1156; number_of_response:1; }","duration":"121.906275ms","start":"2024-01-03T19:00:42.301367Z","end":"2024-01-03T19:00:42.423273Z","steps":["trace[451620558] 'process raft request'  (duration: 121.515105ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-03T19:00:42.423536Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.17983ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:13864"}
	{"level":"info","ts":"2024-01-03T19:00:42.423614Z","caller":"traceutil/trace.go:171","msg":"trace[2076801830] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1156; }","duration":"108.259293ms","start":"2024-01-03T19:00:42.315337Z","end":"2024-01-03T19:00:42.423596Z","steps":["trace[2076801830] 'agreement among raft nodes before linearized reading'  (duration: 108.156321ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-03T19:01:22.534392Z","caller":"traceutil/trace.go:171","msg":"trace[1001643543] transaction","detail":"{read_only:false; response_revision:1455; number_of_response:1; }","duration":"274.497736ms","start":"2024-01-03T19:01:22.259849Z","end":"2024-01-03T19:01:22.534347Z","steps":["trace[1001643543] 'process raft request'  (duration: 274.012804ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-03T19:01:32.954757Z","caller":"traceutil/trace.go:171","msg":"trace[616151848] transaction","detail":"{read_only:false; response_revision:1537; number_of_response:1; }","duration":"196.973016ms","start":"2024-01-03T19:01:32.75776Z","end":"2024-01-03T19:01:32.954733Z","steps":["trace[616151848] 'process raft request'  (duration: 196.858991ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-03T19:02:07.399786Z","caller":"traceutil/trace.go:171","msg":"trace[1902295285] transaction","detail":"{read_only:false; response_revision:1618; number_of_response:1; }","duration":"240.470636ms","start":"2024-01-03T19:02:07.159299Z","end":"2024-01-03T19:02:07.399769Z","steps":["trace[1902295285] 'process raft request'  (duration: 240.112931ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-03T19:02:07.400532Z","caller":"traceutil/trace.go:171","msg":"trace[914434381] transaction","detail":"{read_only:false; response_revision:1619; number_of_response:1; }","duration":"214.859169ms","start":"2024-01-03T19:02:07.185662Z","end":"2024-01-03T19:02:07.400522Z","steps":["trace[914434381] 'process raft request'  (duration: 214.785931ms)"],"step_count":1}
	
	
	==> gcp-auth [8fbc45be3f933d942336cf11de75e7f98e09833f6cd09f5a6bf2e88e2be067c7] <==
	2024/01/03 19:00:46 GCP Auth Webhook started!
	2024/01/03 19:00:55 Ready to marshal response ...
	2024/01/03 19:00:55 Ready to write response ...
	2024/01/03 19:00:55 Ready to marshal response ...
	2024/01/03 19:00:55 Ready to write response ...
	2024/01/03 19:01:07 Ready to marshal response ...
	2024/01/03 19:01:07 Ready to write response ...
	2024/01/03 19:01:09 Ready to marshal response ...
	2024/01/03 19:01:09 Ready to write response ...
	2024/01/03 19:01:09 Ready to marshal response ...
	2024/01/03 19:01:09 Ready to write response ...
	2024/01/03 19:01:18 Ready to marshal response ...
	2024/01/03 19:01:18 Ready to write response ...
	2024/01/03 19:01:26 Ready to marshal response ...
	2024/01/03 19:01:26 Ready to write response ...
	2024/01/03 19:01:26 Ready to marshal response ...
	2024/01/03 19:01:26 Ready to write response ...
	2024/01/03 19:01:26 Ready to marshal response ...
	2024/01/03 19:01:26 Ready to write response ...
	2024/01/03 19:01:52 Ready to marshal response ...
	2024/01/03 19:01:52 Ready to write response ...
	2024/01/03 19:02:21 Ready to marshal response ...
	2024/01/03 19:02:21 Ready to write response ...
	2024/01/03 19:03:33 Ready to marshal response ...
	2024/01/03 19:03:33 Ready to write response ...
	
	
	==> kernel <==
	 19:03:43 up 5 min,  0 users,  load average: 1.78, 2.08, 1.04
	Linux addons-848866 5.10.57 #1 SMP Sat Dec 16 11:03:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [8f11a2d4e0986d6f697a40b40255091126f5bdebb03e6cfc5d6e88114a46e44d] <==
	I0103 19:01:09.866581       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0103 19:01:10.068625       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.97.213.182"}
	I0103 19:01:26.895122       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.109.233.125"}
	I0103 19:01:55.755817       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0103 19:02:07.732265       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0103 19:02:37.780777       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0103 19:02:37.781055       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0103 19:02:37.795234       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0103 19:02:37.795338       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0103 19:02:37.827517       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0103 19:02:37.827626       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0103 19:02:37.828291       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0103 19:02:37.828378       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0103 19:02:37.847064       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0103 19:02:37.847126       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0103 19:02:37.847210       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0103 19:02:37.847254       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0103 19:02:37.859609       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0103 19:02:37.859697       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0103 19:02:37.873165       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0103 19:02:37.874327       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0103 19:02:38.829307       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0103 19:02:38.859754       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0103 19:02:38.904843       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0103 19:03:33.271288       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.110.174.80"}
	
	
	==> kube-controller-manager [ec0ebd78849bf140274d533657f699a8113c57aadbcca35b80fb7884971edcd7] <==
	W0103 19:02:58.988267       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0103 19:02:58.988359       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0103 19:03:00.564028       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0103 19:03:00.564137       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0103 19:03:07.544477       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0103 19:03:07.544544       1 shared_informer.go:318] Caches are synced for resource quota
	W0103 19:03:12.737810       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0103 19:03:12.737967       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0103 19:03:14.865470       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0103 19:03:14.865611       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0103 19:03:22.443627       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0103 19:03:22.443821       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0103 19:03:33.032475       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I0103 19:03:33.066177       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-62spc"
	I0103 19:03:33.086044       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="53.203909ms"
	I0103 19:03:33.102141       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="15.962824ms"
	I0103 19:03:33.118452       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="16.16963ms"
	I0103 19:03:33.118812       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="124.507µs"
	W0103 19:03:33.788499       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0103 19:03:33.788642       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0103 19:03:35.674706       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0103 19:03:35.679602       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-69cff4fd79" duration="9.486µs"
	I0103 19:03:35.693969       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0103 19:03:36.477054       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="15.025716ms"
	I0103 19:03:36.477142       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="42.201µs"
	
	
	==> kube-proxy [6b741bb9b1640d9d305a4670e515b26640d5df2a035b72155215fc469eeac3d1] <==
	I0103 18:59:26.753971       1 server_others.go:69] "Using iptables proxy"
	I0103 18:59:26.940011       1 node.go:141] Successfully retrieved node IP: 192.168.39.253
	I0103 18:59:28.357142       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0103 18:59:28.357222       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0103 18:59:28.538035       1 server_others.go:152] "Using iptables Proxier"
	I0103 18:59:28.538100       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0103 18:59:28.538304       1 server.go:846] "Version info" version="v1.28.4"
	I0103 18:59:28.538314       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0103 18:59:28.626880       1 config.go:315] "Starting node config controller"
	I0103 18:59:28.627046       1 config.go:188] "Starting service config controller"
	I0103 18:59:28.627054       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0103 18:59:28.627113       1 config.go:97] "Starting endpoint slice config controller"
	I0103 18:59:28.627119       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0103 18:59:28.627607       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0103 18:59:28.729097       1 shared_informer.go:318] Caches are synced for node config
	I0103 18:59:28.729230       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0103 18:59:28.729335       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [67e832e826d41c6c845f9164efe972fc267a6e400a4dae895dbe55a591224657] <==
	W0103 18:58:53.332408       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0103 18:58:53.332518       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0103 18:58:53.333663       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0103 18:58:53.333715       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0103 18:58:53.359725       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0103 18:58:53.359771       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0103 18:58:53.371132       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0103 18:58:53.371181       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0103 18:58:53.451946       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0103 18:58:53.452033       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0103 18:58:53.557320       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0103 18:58:53.557345       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0103 18:58:53.612193       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0103 18:58:53.612285       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0103 18:58:53.646349       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0103 18:58:53.646436       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0103 18:58:53.666265       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0103 18:58:53.666347       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0103 18:58:53.689587       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0103 18:58:53.689670       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0103 18:58:53.724093       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0103 18:58:53.724183       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0103 18:58:53.744230       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0103 18:58:53.744280       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0103 18:58:55.808225       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Wed 2024-01-03 18:58:23 UTC, ends at Wed 2024-01-03 19:03:44 UTC. --
	Jan 03 19:03:33 addons-848866 kubelet[1241]: I0103 19:03:33.074437    1241 memory_manager.go:346] "RemoveStaleState removing state" podUID="e1004b45-c943-42c1-91ce-26e2c3896eb4" containerName="csi-attacher"
	Jan 03 19:03:33 addons-848866 kubelet[1241]: I0103 19:03:33.074442    1241 memory_manager.go:346] "RemoveStaleState removing state" podUID="26b5e328-52bd-4e75-8c8c-002908f82d63" containerName="task-pv-container"
	Jan 03 19:03:33 addons-848866 kubelet[1241]: I0103 19:03:33.238050    1241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/6bb489f4-50a5-4948-9083-b18c3026149a-gcp-creds\") pod \"hello-world-app-5d77478584-62spc\" (UID: \"6bb489f4-50a5-4948-9083-b18c3026149a\") " pod="default/hello-world-app-5d77478584-62spc"
	Jan 03 19:03:33 addons-848866 kubelet[1241]: I0103 19:03:33.238103    1241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbmgr\" (UniqueName: \"kubernetes.io/projected/6bb489f4-50a5-4948-9083-b18c3026149a-kube-api-access-tbmgr\") pod \"hello-world-app-5d77478584-62spc\" (UID: \"6bb489f4-50a5-4948-9083-b18c3026149a\") " pod="default/hello-world-app-5d77478584-62spc"
	Jan 03 19:03:34 addons-848866 kubelet[1241]: I0103 19:03:34.437124    1241 scope.go:117] "RemoveContainer" containerID="efc8dc277a7279573ec3cd6d7eca3af5fcda2989dbd17f2b9e0e924885bb15f5"
	Jan 03 19:03:34 addons-848866 kubelet[1241]: I0103 19:03:34.458369    1241 scope.go:117] "RemoveContainer" containerID="efc8dc277a7279573ec3cd6d7eca3af5fcda2989dbd17f2b9e0e924885bb15f5"
	Jan 03 19:03:34 addons-848866 kubelet[1241]: E0103 19:03:34.459001    1241 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"efc8dc277a7279573ec3cd6d7eca3af5fcda2989dbd17f2b9e0e924885bb15f5\": container with ID starting with efc8dc277a7279573ec3cd6d7eca3af5fcda2989dbd17f2b9e0e924885bb15f5 not found: ID does not exist" containerID="efc8dc277a7279573ec3cd6d7eca3af5fcda2989dbd17f2b9e0e924885bb15f5"
	Jan 03 19:03:34 addons-848866 kubelet[1241]: I0103 19:03:34.459045    1241 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efc8dc277a7279573ec3cd6d7eca3af5fcda2989dbd17f2b9e0e924885bb15f5"} err="failed to get container status \"efc8dc277a7279573ec3cd6d7eca3af5fcda2989dbd17f2b9e0e924885bb15f5\": rpc error: code = NotFound desc = could not find container \"efc8dc277a7279573ec3cd6d7eca3af5fcda2989dbd17f2b9e0e924885bb15f5\": container with ID starting with efc8dc277a7279573ec3cd6d7eca3af5fcda2989dbd17f2b9e0e924885bb15f5 not found: ID does not exist"
	Jan 03 19:03:34 addons-848866 kubelet[1241]: I0103 19:03:34.546364    1241 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj5wv\" (UniqueName: \"kubernetes.io/projected/7fb40f5c-ea06-451a-bf9d-4ccd66d89336-kube-api-access-pj5wv\") pod \"7fb40f5c-ea06-451a-bf9d-4ccd66d89336\" (UID: \"7fb40f5c-ea06-451a-bf9d-4ccd66d89336\") "
	Jan 03 19:03:34 addons-848866 kubelet[1241]: I0103 19:03:34.552232    1241 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fb40f5c-ea06-451a-bf9d-4ccd66d89336-kube-api-access-pj5wv" (OuterVolumeSpecName: "kube-api-access-pj5wv") pod "7fb40f5c-ea06-451a-bf9d-4ccd66d89336" (UID: "7fb40f5c-ea06-451a-bf9d-4ccd66d89336"). InnerVolumeSpecName "kube-api-access-pj5wv". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jan 03 19:03:34 addons-848866 kubelet[1241]: I0103 19:03:34.647396    1241 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-pj5wv\" (UniqueName: \"kubernetes.io/projected/7fb40f5c-ea06-451a-bf9d-4ccd66d89336-kube-api-access-pj5wv\") on node \"addons-848866\" DevicePath \"\""
	Jan 03 19:03:35 addons-848866 kubelet[1241]: I0103 19:03:35.894847    1241 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="777eda28-9582-4636-9660-a3f6c02493d3" path="/var/lib/kubelet/pods/777eda28-9582-4636-9660-a3f6c02493d3/volumes"
	Jan 03 19:03:35 addons-848866 kubelet[1241]: I0103 19:03:35.895370    1241 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="7fb40f5c-ea06-451a-bf9d-4ccd66d89336" path="/var/lib/kubelet/pods/7fb40f5c-ea06-451a-bf9d-4ccd66d89336/volumes"
	Jan 03 19:03:35 addons-848866 kubelet[1241]: I0103 19:03:35.895790    1241 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="d5a1c0ea-fa65-4036-ae3d-9be627b91b6d" path="/var/lib/kubelet/pods/d5a1c0ea-fa65-4036-ae3d-9be627b91b6d/volumes"
	Jan 03 19:03:39 addons-848866 kubelet[1241]: I0103 19:03:39.082045    1241 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b58e2abe-841e-4690-aca4-4769a5333a4f-webhook-cert\") pod \"b58e2abe-841e-4690-aca4-4769a5333a4f\" (UID: \"b58e2abe-841e-4690-aca4-4769a5333a4f\") "
	Jan 03 19:03:39 addons-848866 kubelet[1241]: I0103 19:03:39.082106    1241 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ls8wc\" (UniqueName: \"kubernetes.io/projected/b58e2abe-841e-4690-aca4-4769a5333a4f-kube-api-access-ls8wc\") pod \"b58e2abe-841e-4690-aca4-4769a5333a4f\" (UID: \"b58e2abe-841e-4690-aca4-4769a5333a4f\") "
	Jan 03 19:03:39 addons-848866 kubelet[1241]: I0103 19:03:39.084770    1241 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b58e2abe-841e-4690-aca4-4769a5333a4f-kube-api-access-ls8wc" (OuterVolumeSpecName: "kube-api-access-ls8wc") pod "b58e2abe-841e-4690-aca4-4769a5333a4f" (UID: "b58e2abe-841e-4690-aca4-4769a5333a4f"). InnerVolumeSpecName "kube-api-access-ls8wc". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jan 03 19:03:39 addons-848866 kubelet[1241]: I0103 19:03:39.086067    1241 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b58e2abe-841e-4690-aca4-4769a5333a4f-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "b58e2abe-841e-4690-aca4-4769a5333a4f" (UID: "b58e2abe-841e-4690-aca4-4769a5333a4f"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 03 19:03:39 addons-848866 kubelet[1241]: I0103 19:03:39.182548    1241 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b58e2abe-841e-4690-aca4-4769a5333a4f-webhook-cert\") on node \"addons-848866\" DevicePath \"\""
	Jan 03 19:03:39 addons-848866 kubelet[1241]: I0103 19:03:39.182584    1241 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-ls8wc\" (UniqueName: \"kubernetes.io/projected/b58e2abe-841e-4690-aca4-4769a5333a4f-kube-api-access-ls8wc\") on node \"addons-848866\" DevicePath \"\""
	Jan 03 19:03:39 addons-848866 kubelet[1241]: I0103 19:03:39.465704    1241 scope.go:117] "RemoveContainer" containerID="994f94047a2d4a1eef45aba4a43f5fea5953d26368b2d349f7453ba7a088d118"
	Jan 03 19:03:39 addons-848866 kubelet[1241]: I0103 19:03:39.492105    1241 scope.go:117] "RemoveContainer" containerID="994f94047a2d4a1eef45aba4a43f5fea5953d26368b2d349f7453ba7a088d118"
	Jan 03 19:03:39 addons-848866 kubelet[1241]: E0103 19:03:39.492740    1241 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"994f94047a2d4a1eef45aba4a43f5fea5953d26368b2d349f7453ba7a088d118\": container with ID starting with 994f94047a2d4a1eef45aba4a43f5fea5953d26368b2d349f7453ba7a088d118 not found: ID does not exist" containerID="994f94047a2d4a1eef45aba4a43f5fea5953d26368b2d349f7453ba7a088d118"
	Jan 03 19:03:39 addons-848866 kubelet[1241]: I0103 19:03:39.492813    1241 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"994f94047a2d4a1eef45aba4a43f5fea5953d26368b2d349f7453ba7a088d118"} err="failed to get container status \"994f94047a2d4a1eef45aba4a43f5fea5953d26368b2d349f7453ba7a088d118\": rpc error: code = NotFound desc = could not find container \"994f94047a2d4a1eef45aba4a43f5fea5953d26368b2d349f7453ba7a088d118\": container with ID starting with 994f94047a2d4a1eef45aba4a43f5fea5953d26368b2d349f7453ba7a088d118 not found: ID does not exist"
	Jan 03 19:03:39 addons-848866 kubelet[1241]: I0103 19:03:39.900778    1241 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="b58e2abe-841e-4690-aca4-4769a5333a4f" path="/var/lib/kubelet/pods/b58e2abe-841e-4690-aca4-4769a5333a4f/volumes"
	
	
	==> storage-provisioner [1a9b0e585a735682eb7c82450daa50fa3f5e7663e970ee66a1618b2199238013] <==
	I0103 18:59:30.419687       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0103 18:59:30.458546       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0103 18:59:30.458669       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0103 18:59:30.473682       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0103 18:59:30.482997       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-848866_1bda3b2f-40bf-44cd-8f0f-6076e0ff80b3!
	I0103 18:59:30.484581       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0d38029d-9d29-4056-b4da-76963cd0ed2f", APIVersion:"v1", ResourceVersion:"894", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-848866_1bda3b2f-40bf-44cd-8f0f-6076e0ff80b3 became leader
	I0103 18:59:30.583558       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-848866_1bda3b2f-40bf-44cd-8f0f-6076e0ff80b3!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-848866 -n addons-848866
helpers_test.go:261: (dbg) Run:  kubectl --context addons-848866 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (155.07s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (154.92s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-848866
addons_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-848866: exit status 82 (2m0.933465986s)

                                                
                                                
-- stdout --
	* Stopping node "addons-848866"  ...
	* Stopping node "addons-848866"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:174: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-848866" : exit status 82
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-848866
addons_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-848866: exit status 11 (21.695416405s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.253:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:178: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-848866" : exit status 11
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-848866
addons_test.go:180: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-848866: exit status 11 (6.143587814s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.253:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:182: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-848866" : exit status 11
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-848866
addons_test.go:185: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-848866: exit status 11 (6.143156232s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.253:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:187: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-848866" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.92s)

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressAddons (172.44s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:207: (dbg) Run:  kubectl --context ingress-addon-legacy-736101 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:207: (dbg) Done: kubectl --context ingress-addon-legacy-736101 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (15.650475238s)
addons_test.go:232: (dbg) Run:  kubectl --context ingress-addon-legacy-736101 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context ingress-addon-legacy-736101 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [cfaa1997-2011-4cec-809e-dc7487596145] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [cfaa1997-2011-4cec-809e-dc7487596145] Running
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 12.004314803s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-736101 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E0103 19:15:48.653830   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/functional-166268/client.crt: no such file or directory
E0103 19:15:48.659388   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/functional-166268/client.crt: no such file or directory
E0103 19:15:48.669676   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/functional-166268/client.crt: no such file or directory
E0103 19:15:48.689951   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/functional-166268/client.crt: no such file or directory
E0103 19:15:48.730260   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/functional-166268/client.crt: no such file or directory
E0103 19:15:48.810617   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/functional-166268/client.crt: no such file or directory
E0103 19:15:48.971047   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/functional-166268/client.crt: no such file or directory
E0103 19:15:49.291620   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/functional-166268/client.crt: no such file or directory
E0103 19:15:49.932592   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/functional-166268/client.crt: no such file or directory
E0103 19:15:51.212826   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/functional-166268/client.crt: no such file or directory
E0103 19:15:53.773042   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/functional-166268/client.crt: no such file or directory
E0103 19:15:55.307744   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/addons-848866/client.crt: no such file or directory
E0103 19:15:58.894040   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/functional-166268/client.crt: no such file or directory
E0103 19:16:09.134617   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/functional-166268/client.crt: no such file or directory
E0103 19:16:22.995043   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/addons-848866/client.crt: no such file or directory
E0103 19:16:29.615211   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/functional-166268/client.crt: no such file or directory
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ingress-addon-legacy-736101 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m12.33842884s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context ingress-addon-legacy-736101 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-736101 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.191
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-736101 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-736101 addons disable ingress-dns --alsologtostderr -v=1: (1.86866376s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-736101 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-736101 addons disable ingress --alsologtostderr -v=1: (7.847533427s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-736101 -n ingress-addon-legacy-736101
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-736101 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-736101 logs -n 25: (1.089726571s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                    |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| mount   | -p functional-166268                                                      | functional-166268           | jenkins | v1.32.0 | 03 Jan 24 19:11 UTC |                     |
	|         | --kill=true                                                               |                             |         |         |                     |                     |
	| start   | -p functional-166268                                                      | functional-166268           | jenkins | v1.32.0 | 03 Jan 24 19:11 UTC |                     |
	|         | --dry-run --memory                                                        |                             |         |         |                     |                     |
	|         | 250MB --alsologtostderr                                                   |                             |         |         |                     |                     |
	|         | --driver=kvm2                                                             |                             |         |         |                     |                     |
	|         | --container-runtime=crio                                                  |                             |         |         |                     |                     |
	| start   | -p functional-166268                                                      | functional-166268           | jenkins | v1.32.0 | 03 Jan 24 19:11 UTC |                     |
	|         | --dry-run --alsologtostderr                                               |                             |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                                        |                             |         |         |                     |                     |
	|         | --container-runtime=crio                                                  |                             |         |         |                     |                     |
	| image   | functional-166268 image ls                                                | functional-166268           | jenkins | v1.32.0 | 03 Jan 24 19:11 UTC | 03 Jan 24 19:11 UTC |
	| image   | functional-166268 image save                                              | functional-166268           | jenkins | v1.32.0 | 03 Jan 24 19:11 UTC | 03 Jan 24 19:11 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-166268                  |                             |         |         |                     |                     |
	|         | /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image   | functional-166268 image rm                                                | functional-166268           | jenkins | v1.32.0 | 03 Jan 24 19:11 UTC | 03 Jan 24 19:11 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-166268                  |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image   | functional-166268 image ls                                                | functional-166268           | jenkins | v1.32.0 | 03 Jan 24 19:11 UTC | 03 Jan 24 19:11 UTC |
	| image   | functional-166268 image load                                              | functional-166268           | jenkins | v1.32.0 | 03 Jan 24 19:11 UTC | 03 Jan 24 19:11 UTC |
	|         | /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image   | functional-166268 image ls                                                | functional-166268           | jenkins | v1.32.0 | 03 Jan 24 19:11 UTC | 03 Jan 24 19:11 UTC |
	| image   | functional-166268 image save --daemon                                     | functional-166268           | jenkins | v1.32.0 | 03 Jan 24 19:11 UTC | 03 Jan 24 19:11 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-166268                  |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image   | functional-166268                                                         | functional-166268           | jenkins | v1.32.0 | 03 Jan 24 19:11 UTC | 03 Jan 24 19:11 UTC |
	|         | image ls --format yaml                                                    |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| ssh     | functional-166268 ssh pgrep                                               | functional-166268           | jenkins | v1.32.0 | 03 Jan 24 19:11 UTC |                     |
	|         | buildkitd                                                                 |                             |         |         |                     |                     |
	| image   | functional-166268                                                         | functional-166268           | jenkins | v1.32.0 | 03 Jan 24 19:11 UTC | 03 Jan 24 19:11 UTC |
	|         | image ls --format short                                                   |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image   | functional-166268 image build -t                                          | functional-166268           | jenkins | v1.32.0 | 03 Jan 24 19:11 UTC | 03 Jan 24 19:11 UTC |
	|         | localhost/my-image:functional-166268                                      |                             |         |         |                     |                     |
	|         | testdata/build --alsologtostderr                                          |                             |         |         |                     |                     |
	| image   | functional-166268                                                         | functional-166268           | jenkins | v1.32.0 | 03 Jan 24 19:11 UTC | 03 Jan 24 19:11 UTC |
	|         | image ls --format json                                                    |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image   | functional-166268                                                         | functional-166268           | jenkins | v1.32.0 | 03 Jan 24 19:11 UTC | 03 Jan 24 19:11 UTC |
	|         | image ls --format table                                                   |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image   | functional-166268 image ls                                                | functional-166268           | jenkins | v1.32.0 | 03 Jan 24 19:11 UTC | 03 Jan 24 19:11 UTC |
	| delete  | -p functional-166268                                                      | functional-166268           | jenkins | v1.32.0 | 03 Jan 24 19:11 UTC | 03 Jan 24 19:11 UTC |
	| start   | -p ingress-addon-legacy-736101                                            | ingress-addon-legacy-736101 | jenkins | v1.32.0 | 03 Jan 24 19:11 UTC | 03 Jan 24 19:13 UTC |
	|         | --kubernetes-version=v1.18.20                                             |                             |         |         |                     |                     |
	|         | --memory=4096 --wait=true                                                 |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                         |                             |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                                                        |                             |         |         |                     |                     |
	|         | --container-runtime=crio                                                  |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-736101                                               | ingress-addon-legacy-736101 | jenkins | v1.32.0 | 03 Jan 24 19:13 UTC | 03 Jan 24 19:14 UTC |
	|         | addons enable ingress                                                     |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=5                                                    |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-736101                                               | ingress-addon-legacy-736101 | jenkins | v1.32.0 | 03 Jan 24 19:14 UTC | 03 Jan 24 19:14 UTC |
	|         | addons enable ingress-dns                                                 |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=5                                                    |                             |         |         |                     |                     |
	| ssh     | ingress-addon-legacy-736101                                               | ingress-addon-legacy-736101 | jenkins | v1.32.0 | 03 Jan 24 19:14 UTC |                     |
	|         | ssh curl -s http://127.0.0.1/                                             |                             |         |         |                     |                     |
	|         | -H 'Host: nginx.example.com'                                              |                             |         |         |                     |                     |
	| ip      | ingress-addon-legacy-736101 ip                                            | ingress-addon-legacy-736101 | jenkins | v1.32.0 | 03 Jan 24 19:16 UTC | 03 Jan 24 19:16 UTC |
	| addons  | ingress-addon-legacy-736101                                               | ingress-addon-legacy-736101 | jenkins | v1.32.0 | 03 Jan 24 19:16 UTC | 03 Jan 24 19:16 UTC |
	|         | addons disable ingress-dns                                                |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                    |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-736101                                               | ingress-addon-legacy-736101 | jenkins | v1.32.0 | 03 Jan 24 19:16 UTC | 03 Jan 24 19:16 UTC |
	|         | addons disable ingress                                                    |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                    |                             |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/03 19:11:47
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0103 19:11:47.038718   25720 out.go:296] Setting OutFile to fd 1 ...
	I0103 19:11:47.038837   25720 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 19:11:47.038848   25720 out.go:309] Setting ErrFile to fd 2...
	I0103 19:11:47.038852   25720 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 19:11:47.039035   25720 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17885-9609/.minikube/bin
	I0103 19:11:47.039635   25720 out.go:303] Setting JSON to false
	I0103 19:11:47.040506   25720 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3254,"bootTime":1704305853,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0103 19:11:47.040578   25720 start.go:138] virtualization: kvm guest
	I0103 19:11:47.043132   25720 out.go:177] * [ingress-addon-legacy-736101] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0103 19:11:47.045368   25720 out.go:177]   - MINIKUBE_LOCATION=17885
	I0103 19:11:47.045292   25720 notify.go:220] Checking for updates...
	I0103 19:11:47.047137   25720 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0103 19:11:47.048780   25720 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17885-9609/kubeconfig
	I0103 19:11:47.050236   25720 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17885-9609/.minikube
	I0103 19:11:47.051717   25720 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0103 19:11:47.053063   25720 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0103 19:11:47.054659   25720 driver.go:392] Setting default libvirt URI to qemu:///system
	I0103 19:11:47.090675   25720 out.go:177] * Using the kvm2 driver based on user configuration
	I0103 19:11:47.092174   25720 start.go:298] selected driver: kvm2
	I0103 19:11:47.092192   25720 start.go:902] validating driver "kvm2" against <nil>
	I0103 19:11:47.092203   25720 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0103 19:11:47.092928   25720 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 19:11:47.092994   25720 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17885-9609/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0103 19:11:47.108039   25720 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0103 19:11:47.108096   25720 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0103 19:11:47.108322   25720 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0103 19:11:47.108379   25720 cni.go:84] Creating CNI manager for ""
	I0103 19:11:47.108394   25720 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0103 19:11:47.108405   25720 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0103 19:11:47.108415   25720 start_flags.go:323] config:
	{Name:ingress-addon-legacy-736101 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-736101 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 19:11:47.108545   25720 iso.go:125] acquiring lock: {Name:mk59d09085a9554144b68de9b7bfe0e0fce53cc5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 19:11:47.110492   25720 out.go:177] * Starting control plane node ingress-addon-legacy-736101 in cluster ingress-addon-legacy-736101
	I0103 19:11:47.111901   25720 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0103 19:11:47.544575   25720 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I0103 19:11:47.544628   25720 cache.go:56] Caching tarball of preloaded images
	I0103 19:11:47.544820   25720 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0103 19:11:47.547053   25720 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0103 19:11:47.548765   25720 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0103 19:11:47.646714   25720 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4?checksum=md5:0d02e096853189c5b37812b400898e14 -> /home/jenkins/minikube-integration/17885-9609/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I0103 19:12:07.195659   25720 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0103 19:12:07.195749   25720 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17885-9609/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0103 19:12:08.179627   25720 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on crio
	I0103 19:12:08.179955   25720 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/ingress-addon-legacy-736101/config.json ...
	I0103 19:12:08.179984   25720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/ingress-addon-legacy-736101/config.json: {Name:mkbf9aa7748191bf7661e8a63197b8b665ff1d66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 19:12:08.180142   25720 start.go:365] acquiring machines lock for ingress-addon-legacy-736101: {Name:mk43df5d7e9fef8aa5f3e5c539ca15bff35ae8cf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0103 19:12:08.180175   25720 start.go:369] acquired machines lock for "ingress-addon-legacy-736101" in 16.868µs
	I0103 19:12:08.180191   25720 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-736101 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-736101 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0103 19:12:08.180261   25720 start.go:125] createHost starting for "" (driver="kvm2")
	I0103 19:12:08.182830   25720 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0103 19:12:08.182983   25720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 19:12:08.183016   25720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 19:12:08.197247   25720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43465
	I0103 19:12:08.197654   25720 main.go:141] libmachine: () Calling .GetVersion
	I0103 19:12:08.198205   25720 main.go:141] libmachine: Using API Version  1
	I0103 19:12:08.198223   25720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 19:12:08.198616   25720 main.go:141] libmachine: () Calling .GetMachineName
	I0103 19:12:08.198833   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .GetMachineName
	I0103 19:12:08.199018   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .DriverName
	I0103 19:12:08.199221   25720 start.go:159] libmachine.API.Create for "ingress-addon-legacy-736101" (driver="kvm2")
	I0103 19:12:08.199243   25720 client.go:168] LocalClient.Create starting
	I0103 19:12:08.199270   25720 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem
	I0103 19:12:08.199299   25720 main.go:141] libmachine: Decoding PEM data...
	I0103 19:12:08.199314   25720 main.go:141] libmachine: Parsing certificate...
	I0103 19:12:08.199415   25720 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem
	I0103 19:12:08.199455   25720 main.go:141] libmachine: Decoding PEM data...
	I0103 19:12:08.199468   25720 main.go:141] libmachine: Parsing certificate...
	I0103 19:12:08.199491   25720 main.go:141] libmachine: Running pre-create checks...
	I0103 19:12:08.199502   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .PreCreateCheck
	I0103 19:12:08.199855   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .GetConfigRaw
	I0103 19:12:08.200234   25720 main.go:141] libmachine: Creating machine...
	I0103 19:12:08.200249   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .Create
	I0103 19:12:08.200377   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Creating KVM machine...
	I0103 19:12:08.201714   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | found existing default KVM network
	I0103 19:12:08.202403   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | I0103 19:12:08.202244   25789 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015a10}
	I0103 19:12:08.208228   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | trying to create private KVM network mk-ingress-addon-legacy-736101 192.168.39.0/24...
	I0103 19:12:08.279720   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | private KVM network mk-ingress-addon-legacy-736101 192.168.39.0/24 created
	I0103 19:12:08.279756   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | I0103 19:12:08.279648   25789 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17885-9609/.minikube
	I0103 19:12:08.279779   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Setting up store path in /home/jenkins/minikube-integration/17885-9609/.minikube/machines/ingress-addon-legacy-736101 ...
	I0103 19:12:08.279801   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Building disk image from file:///home/jenkins/minikube-integration/17885-9609/.minikube/cache/iso/amd64/minikube-v1.32.1-1702708929-17806-amd64.iso
	I0103 19:12:08.279822   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Downloading /home/jenkins/minikube-integration/17885-9609/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17885-9609/.minikube/cache/iso/amd64/minikube-v1.32.1-1702708929-17806-amd64.iso...
	I0103 19:12:08.485688   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | I0103 19:12:08.485525   25789 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/ingress-addon-legacy-736101/id_rsa...
	I0103 19:12:08.650132   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | I0103 19:12:08.649977   25789 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/ingress-addon-legacy-736101/ingress-addon-legacy-736101.rawdisk...
	I0103 19:12:08.650175   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | Writing magic tar header
	I0103 19:12:08.650190   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | Writing SSH key tar header
	I0103 19:12:08.650201   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | I0103 19:12:08.650091   25789 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17885-9609/.minikube/machines/ingress-addon-legacy-736101 ...
	I0103 19:12:08.650213   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/ingress-addon-legacy-736101
	I0103 19:12:08.650221   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17885-9609/.minikube/machines
	I0103 19:12:08.650230   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Setting executable bit set on /home/jenkins/minikube-integration/17885-9609/.minikube/machines/ingress-addon-legacy-736101 (perms=drwx------)
	I0103 19:12:08.650239   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Setting executable bit set on /home/jenkins/minikube-integration/17885-9609/.minikube/machines (perms=drwxr-xr-x)
	I0103 19:12:08.650248   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Setting executable bit set on /home/jenkins/minikube-integration/17885-9609/.minikube (perms=drwxr-xr-x)
	I0103 19:12:08.650255   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17885-9609/.minikube
	I0103 19:12:08.650267   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17885-9609
	I0103 19:12:08.650281   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0103 19:12:08.650289   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | Checking permissions on dir: /home/jenkins
	I0103 19:12:08.650297   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | Checking permissions on dir: /home
	I0103 19:12:08.650307   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Setting executable bit set on /home/jenkins/minikube-integration/17885-9609 (perms=drwxrwxr-x)
	I0103 19:12:08.650316   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | Skipping /home - not owner
	I0103 19:12:08.650326   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0103 19:12:08.650335   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0103 19:12:08.650343   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Creating domain...
	I0103 19:12:08.651971   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) define libvirt domain using xml: 
	I0103 19:12:08.652004   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) <domain type='kvm'>
	I0103 19:12:08.652018   25720 main.go:141] libmachine: (ingress-addon-legacy-736101)   <name>ingress-addon-legacy-736101</name>
	I0103 19:12:08.652032   25720 main.go:141] libmachine: (ingress-addon-legacy-736101)   <memory unit='MiB'>4096</memory>
	I0103 19:12:08.652062   25720 main.go:141] libmachine: (ingress-addon-legacy-736101)   <vcpu>2</vcpu>
	I0103 19:12:08.652075   25720 main.go:141] libmachine: (ingress-addon-legacy-736101)   <features>
	I0103 19:12:08.652086   25720 main.go:141] libmachine: (ingress-addon-legacy-736101)     <acpi/>
	I0103 19:12:08.652102   25720 main.go:141] libmachine: (ingress-addon-legacy-736101)     <apic/>
	I0103 19:12:08.652133   25720 main.go:141] libmachine: (ingress-addon-legacy-736101)     <pae/>
	I0103 19:12:08.652159   25720 main.go:141] libmachine: (ingress-addon-legacy-736101)     
	I0103 19:12:08.652175   25720 main.go:141] libmachine: (ingress-addon-legacy-736101)   </features>
	I0103 19:12:08.652192   25720 main.go:141] libmachine: (ingress-addon-legacy-736101)   <cpu mode='host-passthrough'>
	I0103 19:12:08.652205   25720 main.go:141] libmachine: (ingress-addon-legacy-736101)   
	I0103 19:12:08.652215   25720 main.go:141] libmachine: (ingress-addon-legacy-736101)   </cpu>
	I0103 19:12:08.652228   25720 main.go:141] libmachine: (ingress-addon-legacy-736101)   <os>
	I0103 19:12:08.652243   25720 main.go:141] libmachine: (ingress-addon-legacy-736101)     <type>hvm</type>
	I0103 19:12:08.652256   25720 main.go:141] libmachine: (ingress-addon-legacy-736101)     <boot dev='cdrom'/>
	I0103 19:12:08.652266   25720 main.go:141] libmachine: (ingress-addon-legacy-736101)     <boot dev='hd'/>
	I0103 19:12:08.652277   25720 main.go:141] libmachine: (ingress-addon-legacy-736101)     <bootmenu enable='no'/>
	I0103 19:12:08.652290   25720 main.go:141] libmachine: (ingress-addon-legacy-736101)   </os>
	I0103 19:12:08.652328   25720 main.go:141] libmachine: (ingress-addon-legacy-736101)   <devices>
	I0103 19:12:08.652355   25720 main.go:141] libmachine: (ingress-addon-legacy-736101)     <disk type='file' device='cdrom'>
	I0103 19:12:08.652378   25720 main.go:141] libmachine: (ingress-addon-legacy-736101)       <source file='/home/jenkins/minikube-integration/17885-9609/.minikube/machines/ingress-addon-legacy-736101/boot2docker.iso'/>
	I0103 19:12:08.652399   25720 main.go:141] libmachine: (ingress-addon-legacy-736101)       <target dev='hdc' bus='scsi'/>
	I0103 19:12:08.652415   25720 main.go:141] libmachine: (ingress-addon-legacy-736101)       <readonly/>
	I0103 19:12:08.652423   25720 main.go:141] libmachine: (ingress-addon-legacy-736101)     </disk>
	I0103 19:12:08.652431   25720 main.go:141] libmachine: (ingress-addon-legacy-736101)     <disk type='file' device='disk'>
	I0103 19:12:08.652440   25720 main.go:141] libmachine: (ingress-addon-legacy-736101)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0103 19:12:08.652450   25720 main.go:141] libmachine: (ingress-addon-legacy-736101)       <source file='/home/jenkins/minikube-integration/17885-9609/.minikube/machines/ingress-addon-legacy-736101/ingress-addon-legacy-736101.rawdisk'/>
	I0103 19:12:08.652458   25720 main.go:141] libmachine: (ingress-addon-legacy-736101)       <target dev='hda' bus='virtio'/>
	I0103 19:12:08.652464   25720 main.go:141] libmachine: (ingress-addon-legacy-736101)     </disk>
	I0103 19:12:08.652473   25720 main.go:141] libmachine: (ingress-addon-legacy-736101)     <interface type='network'>
	I0103 19:12:08.652480   25720 main.go:141] libmachine: (ingress-addon-legacy-736101)       <source network='mk-ingress-addon-legacy-736101'/>
	I0103 19:12:08.652486   25720 main.go:141] libmachine: (ingress-addon-legacy-736101)       <model type='virtio'/>
	I0103 19:12:08.652493   25720 main.go:141] libmachine: (ingress-addon-legacy-736101)     </interface>
	I0103 19:12:08.652506   25720 main.go:141] libmachine: (ingress-addon-legacy-736101)     <interface type='network'>
	I0103 19:12:08.652520   25720 main.go:141] libmachine: (ingress-addon-legacy-736101)       <source network='default'/>
	I0103 19:12:08.652529   25720 main.go:141] libmachine: (ingress-addon-legacy-736101)       <model type='virtio'/>
	I0103 19:12:08.652550   25720 main.go:141] libmachine: (ingress-addon-legacy-736101)     </interface>
	I0103 19:12:08.652563   25720 main.go:141] libmachine: (ingress-addon-legacy-736101)     <serial type='pty'>
	I0103 19:12:08.652578   25720 main.go:141] libmachine: (ingress-addon-legacy-736101)       <target port='0'/>
	I0103 19:12:08.652598   25720 main.go:141] libmachine: (ingress-addon-legacy-736101)     </serial>
	I0103 19:12:08.652611   25720 main.go:141] libmachine: (ingress-addon-legacy-736101)     <console type='pty'>
	I0103 19:12:08.652628   25720 main.go:141] libmachine: (ingress-addon-legacy-736101)       <target type='serial' port='0'/>
	I0103 19:12:08.652642   25720 main.go:141] libmachine: (ingress-addon-legacy-736101)     </console>
	I0103 19:12:08.652653   25720 main.go:141] libmachine: (ingress-addon-legacy-736101)     <rng model='virtio'>
	I0103 19:12:08.652666   25720 main.go:141] libmachine: (ingress-addon-legacy-736101)       <backend model='random'>/dev/random</backend>
	I0103 19:12:08.652678   25720 main.go:141] libmachine: (ingress-addon-legacy-736101)     </rng>
	I0103 19:12:08.652688   25720 main.go:141] libmachine: (ingress-addon-legacy-736101)     
	I0103 19:12:08.652703   25720 main.go:141] libmachine: (ingress-addon-legacy-736101)     
	I0103 19:12:08.652717   25720 main.go:141] libmachine: (ingress-addon-legacy-736101)   </devices>
	I0103 19:12:08.652726   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) </domain>
	I0103 19:12:08.652743   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) 
	I0103 19:12:08.657563   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | domain ingress-addon-legacy-736101 has defined MAC address 52:54:00:cd:69:1c in network default
	I0103 19:12:08.658313   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Ensuring networks are active...
	I0103 19:12:08.658359   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | domain ingress-addon-legacy-736101 has defined MAC address 52:54:00:01:96:c9 in network mk-ingress-addon-legacy-736101
	I0103 19:12:08.659135   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Ensuring network default is active
	I0103 19:12:08.659515   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Ensuring network mk-ingress-addon-legacy-736101 is active
	I0103 19:12:08.660225   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Getting domain xml...
	I0103 19:12:08.661046   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Creating domain...
	I0103 19:12:09.908118   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Waiting to get IP...
	I0103 19:12:09.908725   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | domain ingress-addon-legacy-736101 has defined MAC address 52:54:00:01:96:c9 in network mk-ingress-addon-legacy-736101
	I0103 19:12:09.909106   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | unable to find current IP address of domain ingress-addon-legacy-736101 in network mk-ingress-addon-legacy-736101
	I0103 19:12:09.909187   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | I0103 19:12:09.909089   25789 retry.go:31] will retry after 242.999875ms: waiting for machine to come up
	I0103 19:12:10.153562   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | domain ingress-addon-legacy-736101 has defined MAC address 52:54:00:01:96:c9 in network mk-ingress-addon-legacy-736101
	I0103 19:12:10.153957   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | unable to find current IP address of domain ingress-addon-legacy-736101 in network mk-ingress-addon-legacy-736101
	I0103 19:12:10.153990   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | I0103 19:12:10.153910   25789 retry.go:31] will retry after 247.831598ms: waiting for machine to come up
	I0103 19:12:10.403410   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | domain ingress-addon-legacy-736101 has defined MAC address 52:54:00:01:96:c9 in network mk-ingress-addon-legacy-736101
	I0103 19:12:10.403839   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | unable to find current IP address of domain ingress-addon-legacy-736101 in network mk-ingress-addon-legacy-736101
	I0103 19:12:10.403880   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | I0103 19:12:10.403787   25789 retry.go:31] will retry after 401.501454ms: waiting for machine to come up
	I0103 19:12:10.806395   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | domain ingress-addon-legacy-736101 has defined MAC address 52:54:00:01:96:c9 in network mk-ingress-addon-legacy-736101
	I0103 19:12:10.806854   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | unable to find current IP address of domain ingress-addon-legacy-736101 in network mk-ingress-addon-legacy-736101
	I0103 19:12:10.806880   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | I0103 19:12:10.806805   25789 retry.go:31] will retry after 544.869488ms: waiting for machine to come up
	I0103 19:12:11.353789   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | domain ingress-addon-legacy-736101 has defined MAC address 52:54:00:01:96:c9 in network mk-ingress-addon-legacy-736101
	I0103 19:12:11.354286   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | unable to find current IP address of domain ingress-addon-legacy-736101 in network mk-ingress-addon-legacy-736101
	I0103 19:12:11.354316   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | I0103 19:12:11.354225   25789 retry.go:31] will retry after 599.344006ms: waiting for machine to come up
	I0103 19:12:11.955167   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | domain ingress-addon-legacy-736101 has defined MAC address 52:54:00:01:96:c9 in network mk-ingress-addon-legacy-736101
	I0103 19:12:11.955676   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | unable to find current IP address of domain ingress-addon-legacy-736101 in network mk-ingress-addon-legacy-736101
	I0103 19:12:11.955709   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | I0103 19:12:11.955619   25789 retry.go:31] will retry after 636.711142ms: waiting for machine to come up
	I0103 19:12:12.593705   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | domain ingress-addon-legacy-736101 has defined MAC address 52:54:00:01:96:c9 in network mk-ingress-addon-legacy-736101
	I0103 19:12:12.594207   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | unable to find current IP address of domain ingress-addon-legacy-736101 in network mk-ingress-addon-legacy-736101
	I0103 19:12:12.594234   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | I0103 19:12:12.594110   25789 retry.go:31] will retry after 966.246769ms: waiting for machine to come up
	I0103 19:12:13.561560   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | domain ingress-addon-legacy-736101 has defined MAC address 52:54:00:01:96:c9 in network mk-ingress-addon-legacy-736101
	I0103 19:12:13.561922   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | unable to find current IP address of domain ingress-addon-legacy-736101 in network mk-ingress-addon-legacy-736101
	I0103 19:12:13.561951   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | I0103 19:12:13.561909   25789 retry.go:31] will retry after 1.402946275s: waiting for machine to come up
	I0103 19:12:14.966793   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | domain ingress-addon-legacy-736101 has defined MAC address 52:54:00:01:96:c9 in network mk-ingress-addon-legacy-736101
	I0103 19:12:14.967257   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | unable to find current IP address of domain ingress-addon-legacy-736101 in network mk-ingress-addon-legacy-736101
	I0103 19:12:14.967283   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | I0103 19:12:14.967216   25789 retry.go:31] will retry after 1.499229133s: waiting for machine to come up
	I0103 19:12:16.468942   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | domain ingress-addon-legacy-736101 has defined MAC address 52:54:00:01:96:c9 in network mk-ingress-addon-legacy-736101
	I0103 19:12:16.469428   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | unable to find current IP address of domain ingress-addon-legacy-736101 in network mk-ingress-addon-legacy-736101
	I0103 19:12:16.469459   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | I0103 19:12:16.469381   25789 retry.go:31] will retry after 1.763434669s: waiting for machine to come up
	I0103 19:12:18.234438   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | domain ingress-addon-legacy-736101 has defined MAC address 52:54:00:01:96:c9 in network mk-ingress-addon-legacy-736101
	I0103 19:12:18.234900   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | unable to find current IP address of domain ingress-addon-legacy-736101 in network mk-ingress-addon-legacy-736101
	I0103 19:12:18.234928   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | I0103 19:12:18.234861   25789 retry.go:31] will retry after 2.213404159s: waiting for machine to come up
	I0103 19:12:20.451302   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | domain ingress-addon-legacy-736101 has defined MAC address 52:54:00:01:96:c9 in network mk-ingress-addon-legacy-736101
	I0103 19:12:20.451762   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | unable to find current IP address of domain ingress-addon-legacy-736101 in network mk-ingress-addon-legacy-736101
	I0103 19:12:20.451793   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | I0103 19:12:20.451705   25789 retry.go:31] will retry after 3.461550874s: waiting for machine to come up
	I0103 19:12:23.914642   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | domain ingress-addon-legacy-736101 has defined MAC address 52:54:00:01:96:c9 in network mk-ingress-addon-legacy-736101
	I0103 19:12:23.915051   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | unable to find current IP address of domain ingress-addon-legacy-736101 in network mk-ingress-addon-legacy-736101
	I0103 19:12:23.915076   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | I0103 19:12:23.915021   25789 retry.go:31] will retry after 3.143670024s: waiting for machine to come up
	I0103 19:12:27.062390   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | domain ingress-addon-legacy-736101 has defined MAC address 52:54:00:01:96:c9 in network mk-ingress-addon-legacy-736101
	I0103 19:12:27.062790   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | unable to find current IP address of domain ingress-addon-legacy-736101 in network mk-ingress-addon-legacy-736101
	I0103 19:12:27.062815   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | I0103 19:12:27.062754   25789 retry.go:31] will retry after 4.082833914s: waiting for machine to come up
	I0103 19:12:31.149360   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | domain ingress-addon-legacy-736101 has defined MAC address 52:54:00:01:96:c9 in network mk-ingress-addon-legacy-736101
	I0103 19:12:31.149787   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Found IP for machine: 192.168.39.191
	I0103 19:12:31.149807   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Reserving static IP address...
	I0103 19:12:31.149821   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | domain ingress-addon-legacy-736101 has current primary IP address 192.168.39.191 and MAC address 52:54:00:01:96:c9 in network mk-ingress-addon-legacy-736101
	I0103 19:12:31.150421   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | unable to find host DHCP lease matching {name: "ingress-addon-legacy-736101", mac: "52:54:00:01:96:c9", ip: "192.168.39.191"} in network mk-ingress-addon-legacy-736101
	I0103 19:12:31.225194   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | Getting to WaitForSSH function...
	I0103 19:12:31.225225   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Reserved static IP address: 192.168.39.191
	I0103 19:12:31.225240   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Waiting for SSH to be available...
	I0103 19:12:31.227456   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | domain ingress-addon-legacy-736101 has defined MAC address 52:54:00:01:96:c9 in network mk-ingress-addon-legacy-736101
	I0103 19:12:31.227775   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:96:c9", ip: ""} in network mk-ingress-addon-legacy-736101: {Iface:virbr1 ExpiryTime:2024-01-03 20:12:23 +0000 UTC Type:0 Mac:52:54:00:01:96:c9 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:minikube Clientid:01:52:54:00:01:96:c9}
	I0103 19:12:31.227810   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | domain ingress-addon-legacy-736101 has defined IP address 192.168.39.191 and MAC address 52:54:00:01:96:c9 in network mk-ingress-addon-legacy-736101
	I0103 19:12:31.227904   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | Using SSH client type: external
	I0103 19:12:31.227936   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | Using SSH private key: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/ingress-addon-legacy-736101/id_rsa (-rw-------)
	I0103 19:12:31.227970   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.191 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17885-9609/.minikube/machines/ingress-addon-legacy-736101/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0103 19:12:31.227993   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | About to run SSH command:
	I0103 19:12:31.228021   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | exit 0
	I0103 19:12:31.318593   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | SSH cmd err, output: <nil>: 
	I0103 19:12:31.318844   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) KVM machine creation complete!
	I0103 19:12:31.319124   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .GetConfigRaw
	I0103 19:12:31.319631   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .DriverName
	I0103 19:12:31.319814   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .DriverName
	I0103 19:12:31.319970   25720 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0103 19:12:31.319985   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .GetState
	I0103 19:12:31.321574   25720 main.go:141] libmachine: Detecting operating system of created instance...
	I0103 19:12:31.321588   25720 main.go:141] libmachine: Waiting for SSH to be available...
	I0103 19:12:31.321594   25720 main.go:141] libmachine: Getting to WaitForSSH function...
	I0103 19:12:31.321601   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .GetSSHHostname
	I0103 19:12:31.324143   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | domain ingress-addon-legacy-736101 has defined MAC address 52:54:00:01:96:c9 in network mk-ingress-addon-legacy-736101
	I0103 19:12:31.324492   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:96:c9", ip: ""} in network mk-ingress-addon-legacy-736101: {Iface:virbr1 ExpiryTime:2024-01-03 20:12:23 +0000 UTC Type:0 Mac:52:54:00:01:96:c9 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ingress-addon-legacy-736101 Clientid:01:52:54:00:01:96:c9}
	I0103 19:12:31.324524   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | domain ingress-addon-legacy-736101 has defined IP address 192.168.39.191 and MAC address 52:54:00:01:96:c9 in network mk-ingress-addon-legacy-736101
	I0103 19:12:31.324651   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .GetSSHPort
	I0103 19:12:31.324830   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .GetSSHKeyPath
	I0103 19:12:31.324970   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .GetSSHKeyPath
	I0103 19:12:31.325131   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .GetSSHUsername
	I0103 19:12:31.325318   25720 main.go:141] libmachine: Using SSH client type: native
	I0103 19:12:31.325804   25720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I0103 19:12:31.325821   25720 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0103 19:12:31.441831   25720 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0103 19:12:31.441857   25720 main.go:141] libmachine: Detecting the provisioner...
	I0103 19:12:31.441866   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .GetSSHHostname
	I0103 19:12:31.444496   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | domain ingress-addon-legacy-736101 has defined MAC address 52:54:00:01:96:c9 in network mk-ingress-addon-legacy-736101
	I0103 19:12:31.444903   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:96:c9", ip: ""} in network mk-ingress-addon-legacy-736101: {Iface:virbr1 ExpiryTime:2024-01-03 20:12:23 +0000 UTC Type:0 Mac:52:54:00:01:96:c9 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ingress-addon-legacy-736101 Clientid:01:52:54:00:01:96:c9}
	I0103 19:12:31.444932   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | domain ingress-addon-legacy-736101 has defined IP address 192.168.39.191 and MAC address 52:54:00:01:96:c9 in network mk-ingress-addon-legacy-736101
	I0103 19:12:31.445020   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .GetSSHPort
	I0103 19:12:31.445231   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .GetSSHKeyPath
	I0103 19:12:31.445391   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .GetSSHKeyPath
	I0103 19:12:31.445544   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .GetSSHUsername
	I0103 19:12:31.445754   25720 main.go:141] libmachine: Using SSH client type: native
	I0103 19:12:31.446114   25720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I0103 19:12:31.446127   25720 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0103 19:12:31.562948   25720 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gae27a7b-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0103 19:12:31.563016   25720 main.go:141] libmachine: found compatible host: buildroot
	I0103 19:12:31.563023   25720 main.go:141] libmachine: Provisioning with buildroot...
	I0103 19:12:31.563031   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .GetMachineName
	I0103 19:12:31.563322   25720 buildroot.go:166] provisioning hostname "ingress-addon-legacy-736101"
	I0103 19:12:31.563354   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .GetMachineName
	I0103 19:12:31.563557   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .GetSSHHostname
	I0103 19:12:31.566322   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | domain ingress-addon-legacy-736101 has defined MAC address 52:54:00:01:96:c9 in network mk-ingress-addon-legacy-736101
	I0103 19:12:31.566795   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:96:c9", ip: ""} in network mk-ingress-addon-legacy-736101: {Iface:virbr1 ExpiryTime:2024-01-03 20:12:23 +0000 UTC Type:0 Mac:52:54:00:01:96:c9 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ingress-addon-legacy-736101 Clientid:01:52:54:00:01:96:c9}
	I0103 19:12:31.566826   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | domain ingress-addon-legacy-736101 has defined IP address 192.168.39.191 and MAC address 52:54:00:01:96:c9 in network mk-ingress-addon-legacy-736101
	I0103 19:12:31.566976   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .GetSSHPort
	I0103 19:12:31.567195   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .GetSSHKeyPath
	I0103 19:12:31.567363   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .GetSSHKeyPath
	I0103 19:12:31.567538   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .GetSSHUsername
	I0103 19:12:31.567710   25720 main.go:141] libmachine: Using SSH client type: native
	I0103 19:12:31.568179   25720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I0103 19:12:31.568201   25720 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-736101 && echo "ingress-addon-legacy-736101" | sudo tee /etc/hostname
	I0103 19:12:31.694220   25720 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-736101
	
	I0103 19:12:31.694258   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .GetSSHHostname
	I0103 19:12:31.696931   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | domain ingress-addon-legacy-736101 has defined MAC address 52:54:00:01:96:c9 in network mk-ingress-addon-legacy-736101
	I0103 19:12:31.697370   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:96:c9", ip: ""} in network mk-ingress-addon-legacy-736101: {Iface:virbr1 ExpiryTime:2024-01-03 20:12:23 +0000 UTC Type:0 Mac:52:54:00:01:96:c9 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ingress-addon-legacy-736101 Clientid:01:52:54:00:01:96:c9}
	I0103 19:12:31.697396   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | domain ingress-addon-legacy-736101 has defined IP address 192.168.39.191 and MAC address 52:54:00:01:96:c9 in network mk-ingress-addon-legacy-736101
	I0103 19:12:31.697627   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .GetSSHPort
	I0103 19:12:31.697813   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .GetSSHKeyPath
	I0103 19:12:31.697961   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .GetSSHKeyPath
	I0103 19:12:31.698148   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .GetSSHUsername
	I0103 19:12:31.698314   25720 main.go:141] libmachine: Using SSH client type: native
	I0103 19:12:31.698718   25720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I0103 19:12:31.698741   25720 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-736101' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-736101/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-736101' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0103 19:12:31.821160   25720 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0103 19:12:31.821190   25720 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17885-9609/.minikube CaCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17885-9609/.minikube}
	I0103 19:12:31.821210   25720 buildroot.go:174] setting up certificates
	I0103 19:12:31.821220   25720 provision.go:83] configureAuth start
	I0103 19:12:31.821230   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .GetMachineName
	I0103 19:12:31.821497   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .GetIP
	I0103 19:12:31.824001   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | domain ingress-addon-legacy-736101 has defined MAC address 52:54:00:01:96:c9 in network mk-ingress-addon-legacy-736101
	I0103 19:12:31.824320   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:96:c9", ip: ""} in network mk-ingress-addon-legacy-736101: {Iface:virbr1 ExpiryTime:2024-01-03 20:12:23 +0000 UTC Type:0 Mac:52:54:00:01:96:c9 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ingress-addon-legacy-736101 Clientid:01:52:54:00:01:96:c9}
	I0103 19:12:31.824355   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | domain ingress-addon-legacy-736101 has defined IP address 192.168.39.191 and MAC address 52:54:00:01:96:c9 in network mk-ingress-addon-legacy-736101
	I0103 19:12:31.824525   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .GetSSHHostname
	I0103 19:12:31.826922   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | domain ingress-addon-legacy-736101 has defined MAC address 52:54:00:01:96:c9 in network mk-ingress-addon-legacy-736101
	I0103 19:12:31.827227   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:96:c9", ip: ""} in network mk-ingress-addon-legacy-736101: {Iface:virbr1 ExpiryTime:2024-01-03 20:12:23 +0000 UTC Type:0 Mac:52:54:00:01:96:c9 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ingress-addon-legacy-736101 Clientid:01:52:54:00:01:96:c9}
	I0103 19:12:31.827254   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | domain ingress-addon-legacy-736101 has defined IP address 192.168.39.191 and MAC address 52:54:00:01:96:c9 in network mk-ingress-addon-legacy-736101
	I0103 19:12:31.827393   25720 provision.go:138] copyHostCerts
	I0103 19:12:31.827422   25720 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem
	I0103 19:12:31.827449   25720 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem, removing ...
	I0103 19:12:31.827464   25720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem
	I0103 19:12:31.827531   25720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem (1078 bytes)
	I0103 19:12:31.827619   25720 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem
	I0103 19:12:31.827644   25720 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem, removing ...
	I0103 19:12:31.827654   25720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem
	I0103 19:12:31.827689   25720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem (1123 bytes)
	I0103 19:12:31.827736   25720 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem
	I0103 19:12:31.827752   25720 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem, removing ...
	I0103 19:12:31.827758   25720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem
	I0103 19:12:31.827781   25720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem (1679 bytes)
	I0103 19:12:31.827824   25720 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-736101 san=[192.168.39.191 192.168.39.191 localhost 127.0.0.1 minikube ingress-addon-legacy-736101]
	I0103 19:12:32.109729   25720 provision.go:172] copyRemoteCerts
	I0103 19:12:32.109785   25720 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0103 19:12:32.109808   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .GetSSHHostname
	I0103 19:12:32.112709   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | domain ingress-addon-legacy-736101 has defined MAC address 52:54:00:01:96:c9 in network mk-ingress-addon-legacy-736101
	I0103 19:12:32.113044   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:96:c9", ip: ""} in network mk-ingress-addon-legacy-736101: {Iface:virbr1 ExpiryTime:2024-01-03 20:12:23 +0000 UTC Type:0 Mac:52:54:00:01:96:c9 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ingress-addon-legacy-736101 Clientid:01:52:54:00:01:96:c9}
	I0103 19:12:32.113073   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | domain ingress-addon-legacy-736101 has defined IP address 192.168.39.191 and MAC address 52:54:00:01:96:c9 in network mk-ingress-addon-legacy-736101
	I0103 19:12:32.113266   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .GetSSHPort
	I0103 19:12:32.113503   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .GetSSHKeyPath
	I0103 19:12:32.113651   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .GetSSHUsername
	I0103 19:12:32.113766   25720 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/ingress-addon-legacy-736101/id_rsa Username:docker}
	I0103 19:12:32.200207   25720 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0103 19:12:32.200306   25720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0103 19:12:32.222532   25720 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0103 19:12:32.222597   25720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0103 19:12:32.242279   25720 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0103 19:12:32.242338   25720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0103 19:12:32.262795   25720 provision.go:86] duration metric: configureAuth took 441.564193ms
	I0103 19:12:32.262822   25720 buildroot.go:189] setting minikube options for container-runtime
	I0103 19:12:32.262990   25720 config.go:182] Loaded profile config "ingress-addon-legacy-736101": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0103 19:12:32.263072   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .GetSSHHostname
	I0103 19:12:32.265825   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | domain ingress-addon-legacy-736101 has defined MAC address 52:54:00:01:96:c9 in network mk-ingress-addon-legacy-736101
	I0103 19:12:32.266177   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:96:c9", ip: ""} in network mk-ingress-addon-legacy-736101: {Iface:virbr1 ExpiryTime:2024-01-03 20:12:23 +0000 UTC Type:0 Mac:52:54:00:01:96:c9 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ingress-addon-legacy-736101 Clientid:01:52:54:00:01:96:c9}
	I0103 19:12:32.266205   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | domain ingress-addon-legacy-736101 has defined IP address 192.168.39.191 and MAC address 52:54:00:01:96:c9 in network mk-ingress-addon-legacy-736101
	I0103 19:12:32.266351   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .GetSSHPort
	I0103 19:12:32.266603   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .GetSSHKeyPath
	I0103 19:12:32.266760   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .GetSSHKeyPath
	I0103 19:12:32.266903   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .GetSSHUsername
	I0103 19:12:32.267045   25720 main.go:141] libmachine: Using SSH client type: native
	I0103 19:12:32.267357   25720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I0103 19:12:32.267373   25720 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0103 19:12:32.554499   25720 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0103 19:12:32.554541   25720 main.go:141] libmachine: Checking connection to Docker...
	I0103 19:12:32.554553   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .GetURL
	I0103 19:12:32.555699   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | Using libvirt version 6000000
	I0103 19:12:32.557843   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | domain ingress-addon-legacy-736101 has defined MAC address 52:54:00:01:96:c9 in network mk-ingress-addon-legacy-736101
	I0103 19:12:32.558126   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:96:c9", ip: ""} in network mk-ingress-addon-legacy-736101: {Iface:virbr1 ExpiryTime:2024-01-03 20:12:23 +0000 UTC Type:0 Mac:52:54:00:01:96:c9 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ingress-addon-legacy-736101 Clientid:01:52:54:00:01:96:c9}
	I0103 19:12:32.558160   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | domain ingress-addon-legacy-736101 has defined IP address 192.168.39.191 and MAC address 52:54:00:01:96:c9 in network mk-ingress-addon-legacy-736101
	I0103 19:12:32.558269   25720 main.go:141] libmachine: Docker is up and running!
	I0103 19:12:32.558288   25720 main.go:141] libmachine: Reticulating splines...
	I0103 19:12:32.558297   25720 client.go:171] LocalClient.Create took 24.359046017s
	I0103 19:12:32.558321   25720 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-736101" took 24.359100794s
	I0103 19:12:32.558334   25720 start.go:300] post-start starting for "ingress-addon-legacy-736101" (driver="kvm2")
	I0103 19:12:32.558349   25720 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0103 19:12:32.558371   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .DriverName
	I0103 19:12:32.558667   25720 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0103 19:12:32.558693   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .GetSSHHostname
	I0103 19:12:32.560520   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | domain ingress-addon-legacy-736101 has defined MAC address 52:54:00:01:96:c9 in network mk-ingress-addon-legacy-736101
	I0103 19:12:32.560788   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:96:c9", ip: ""} in network mk-ingress-addon-legacy-736101: {Iface:virbr1 ExpiryTime:2024-01-03 20:12:23 +0000 UTC Type:0 Mac:52:54:00:01:96:c9 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ingress-addon-legacy-736101 Clientid:01:52:54:00:01:96:c9}
	I0103 19:12:32.560820   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | domain ingress-addon-legacy-736101 has defined IP address 192.168.39.191 and MAC address 52:54:00:01:96:c9 in network mk-ingress-addon-legacy-736101
	I0103 19:12:32.560980   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .GetSSHPort
	I0103 19:12:32.561144   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .GetSSHKeyPath
	I0103 19:12:32.561278   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .GetSSHUsername
	I0103 19:12:32.561399   25720 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/ingress-addon-legacy-736101/id_rsa Username:docker}
	I0103 19:12:32.647249   25720 ssh_runner.go:195] Run: cat /etc/os-release
	I0103 19:12:32.651311   25720 info.go:137] Remote host: Buildroot 2021.02.12
	I0103 19:12:32.651342   25720 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-9609/.minikube/addons for local assets ...
	I0103 19:12:32.651419   25720 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-9609/.minikube/files for local assets ...
	I0103 19:12:32.651527   25720 filesync.go:149] local asset: /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem -> 167952.pem in /etc/ssl/certs
	I0103 19:12:32.651541   25720 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem -> /etc/ssl/certs/167952.pem
	I0103 19:12:32.651628   25720 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0103 19:12:32.659471   25720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem --> /etc/ssl/certs/167952.pem (1708 bytes)
	I0103 19:12:32.683417   25720 start.go:303] post-start completed in 125.057156ms
	I0103 19:12:32.683537   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .GetConfigRaw
	I0103 19:12:32.684782   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .GetIP
	I0103 19:12:32.687614   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | domain ingress-addon-legacy-736101 has defined MAC address 52:54:00:01:96:c9 in network mk-ingress-addon-legacy-736101
	I0103 19:12:32.687932   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:96:c9", ip: ""} in network mk-ingress-addon-legacy-736101: {Iface:virbr1 ExpiryTime:2024-01-03 20:12:23 +0000 UTC Type:0 Mac:52:54:00:01:96:c9 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ingress-addon-legacy-736101 Clientid:01:52:54:00:01:96:c9}
	I0103 19:12:32.687962   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | domain ingress-addon-legacy-736101 has defined IP address 192.168.39.191 and MAC address 52:54:00:01:96:c9 in network mk-ingress-addon-legacy-736101
	I0103 19:12:32.688237   25720 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/ingress-addon-legacy-736101/config.json ...
	I0103 19:12:32.688426   25720 start.go:128] duration metric: createHost completed in 24.508155409s
	I0103 19:12:32.688448   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .GetSSHHostname
	I0103 19:12:32.690582   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | domain ingress-addon-legacy-736101 has defined MAC address 52:54:00:01:96:c9 in network mk-ingress-addon-legacy-736101
	I0103 19:12:32.690912   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:96:c9", ip: ""} in network mk-ingress-addon-legacy-736101: {Iface:virbr1 ExpiryTime:2024-01-03 20:12:23 +0000 UTC Type:0 Mac:52:54:00:01:96:c9 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ingress-addon-legacy-736101 Clientid:01:52:54:00:01:96:c9}
	I0103 19:12:32.690940   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | domain ingress-addon-legacy-736101 has defined IP address 192.168.39.191 and MAC address 52:54:00:01:96:c9 in network mk-ingress-addon-legacy-736101
	I0103 19:12:32.691092   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .GetSSHPort
	I0103 19:12:32.691268   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .GetSSHKeyPath
	I0103 19:12:32.691401   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .GetSSHKeyPath
	I0103 19:12:32.691519   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .GetSSHUsername
	I0103 19:12:32.691680   25720 main.go:141] libmachine: Using SSH client type: native
	I0103 19:12:32.692009   25720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I0103 19:12:32.692024   25720 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0103 19:12:32.807080   25720 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704309152.780812896
	
	I0103 19:12:32.807102   25720 fix.go:206] guest clock: 1704309152.780812896
	I0103 19:12:32.807111   25720 fix.go:219] Guest: 2024-01-03 19:12:32.780812896 +0000 UTC Remote: 2024-01-03 19:12:32.688437143 +0000 UTC m=+45.698940038 (delta=92.375753ms)
	I0103 19:12:32.807132   25720 fix.go:190] guest clock delta is within tolerance: 92.375753ms
	I0103 19:12:32.807138   25720 start.go:83] releasing machines lock for "ingress-addon-legacy-736101", held for 24.626957147s
	I0103 19:12:32.807164   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .DriverName
	I0103 19:12:32.807471   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .GetIP
	I0103 19:12:32.809933   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | domain ingress-addon-legacy-736101 has defined MAC address 52:54:00:01:96:c9 in network mk-ingress-addon-legacy-736101
	I0103 19:12:32.810209   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:96:c9", ip: ""} in network mk-ingress-addon-legacy-736101: {Iface:virbr1 ExpiryTime:2024-01-03 20:12:23 +0000 UTC Type:0 Mac:52:54:00:01:96:c9 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ingress-addon-legacy-736101 Clientid:01:52:54:00:01:96:c9}
	I0103 19:12:32.810250   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | domain ingress-addon-legacy-736101 has defined IP address 192.168.39.191 and MAC address 52:54:00:01:96:c9 in network mk-ingress-addon-legacy-736101
	I0103 19:12:32.810398   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .DriverName
	I0103 19:12:32.810875   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .DriverName
	I0103 19:12:32.811049   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .DriverName
	I0103 19:12:32.811135   25720 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0103 19:12:32.811185   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .GetSSHHostname
	I0103 19:12:32.811383   25720 ssh_runner.go:195] Run: cat /version.json
	I0103 19:12:32.811409   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .GetSSHHostname
	I0103 19:12:32.813964   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | domain ingress-addon-legacy-736101 has defined MAC address 52:54:00:01:96:c9 in network mk-ingress-addon-legacy-736101
	I0103 19:12:32.813994   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | domain ingress-addon-legacy-736101 has defined MAC address 52:54:00:01:96:c9 in network mk-ingress-addon-legacy-736101
	I0103 19:12:32.814317   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:96:c9", ip: ""} in network mk-ingress-addon-legacy-736101: {Iface:virbr1 ExpiryTime:2024-01-03 20:12:23 +0000 UTC Type:0 Mac:52:54:00:01:96:c9 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ingress-addon-legacy-736101 Clientid:01:52:54:00:01:96:c9}
	I0103 19:12:32.814356   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | domain ingress-addon-legacy-736101 has defined IP address 192.168.39.191 and MAC address 52:54:00:01:96:c9 in network mk-ingress-addon-legacy-736101
	I0103 19:12:32.814383   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:96:c9", ip: ""} in network mk-ingress-addon-legacy-736101: {Iface:virbr1 ExpiryTime:2024-01-03 20:12:23 +0000 UTC Type:0 Mac:52:54:00:01:96:c9 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ingress-addon-legacy-736101 Clientid:01:52:54:00:01:96:c9}
	I0103 19:12:32.814397   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | domain ingress-addon-legacy-736101 has defined IP address 192.168.39.191 and MAC address 52:54:00:01:96:c9 in network mk-ingress-addon-legacy-736101
	I0103 19:12:32.814488   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .GetSSHPort
	I0103 19:12:32.814609   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .GetSSHPort
	I0103 19:12:32.814693   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .GetSSHKeyPath
	I0103 19:12:32.814796   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .GetSSHKeyPath
	I0103 19:12:32.814856   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .GetSSHUsername
	I0103 19:12:32.814943   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .GetSSHUsername
	I0103 19:12:32.815036   25720 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/ingress-addon-legacy-736101/id_rsa Username:docker}
	I0103 19:12:32.815171   25720 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/ingress-addon-legacy-736101/id_rsa Username:docker}
	I0103 19:12:32.894893   25720 ssh_runner.go:195] Run: systemctl --version
	I0103 19:12:32.936407   25720 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0103 19:12:33.092275   25720 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0103 19:12:33.098357   25720 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0103 19:12:33.098437   25720 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0103 19:12:33.111713   25720 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0103 19:12:33.111734   25720 start.go:475] detecting cgroup driver to use...
	I0103 19:12:33.111803   25720 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0103 19:12:33.123567   25720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0103 19:12:33.135192   25720 docker.go:203] disabling cri-docker service (if available) ...
	I0103 19:12:33.135268   25720 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0103 19:12:33.146856   25720 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0103 19:12:33.158568   25720 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0103 19:12:33.259577   25720 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0103 19:12:33.380751   25720 docker.go:219] disabling docker service ...
	I0103 19:12:33.380833   25720 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0103 19:12:33.393483   25720 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0103 19:12:33.405339   25720 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0103 19:12:33.519008   25720 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0103 19:12:33.630821   25720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0103 19:12:33.642364   25720 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0103 19:12:33.658202   25720 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0103 19:12:33.658262   25720 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 19:12:33.666551   25720 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0103 19:12:33.666612   25720 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 19:12:33.675108   25720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 19:12:33.683562   25720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 19:12:33.692068   25720 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0103 19:12:33.701038   25720 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0103 19:12:33.708716   25720 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0103 19:12:33.708795   25720 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0103 19:12:33.720187   25720 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0103 19:12:33.728263   25720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0103 19:12:33.836434   25720 ssh_runner.go:195] Run: sudo systemctl restart crio
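For reference, the CRI-O adjustments minikube just applied (pause image, cgroupfs cgroup manager, conmon cgroup) can be reproduced and checked by hand. A minimal sketch, assuming the same drop-in file /etc/crio/crio.conf.d/02-crio.conf that the log edits:

    # Apply the same settings the log shows, then confirm they landed.
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo systemctl restart crio
    grep -E 'pause_image|cgroup_manager' /etc/crio/crio.conf.d/02-crio.conf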
	I0103 19:12:33.994837   25720 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0103 19:12:33.994899   25720 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0103 19:12:33.999248   25720 start.go:543] Will wait 60s for crictl version
	I0103 19:12:33.999295   25720 ssh_runner.go:195] Run: which crictl
	I0103 19:12:34.002816   25720 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0103 19:12:34.037023   25720 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0103 19:12:34.037121   25720 ssh_runner.go:195] Run: crio --version
	I0103 19:12:34.086277   25720 ssh_runner.go:195] Run: crio --version
	I0103 19:12:34.130042   25720 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.1 ...
	I0103 19:12:34.131615   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .GetIP
	I0103 19:12:34.134176   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | domain ingress-addon-legacy-736101 has defined MAC address 52:54:00:01:96:c9 in network mk-ingress-addon-legacy-736101
	I0103 19:12:34.134610   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:96:c9", ip: ""} in network mk-ingress-addon-legacy-736101: {Iface:virbr1 ExpiryTime:2024-01-03 20:12:23 +0000 UTC Type:0 Mac:52:54:00:01:96:c9 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ingress-addon-legacy-736101 Clientid:01:52:54:00:01:96:c9}
	I0103 19:12:34.134637   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | domain ingress-addon-legacy-736101 has defined IP address 192.168.39.191 and MAC address 52:54:00:01:96:c9 in network mk-ingress-addon-legacy-736101
	I0103 19:12:34.134870   25720 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0103 19:12:34.139005   25720 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0103 19:12:34.150918   25720 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0103 19:12:34.150973   25720 ssh_runner.go:195] Run: sudo crictl images --output json
	I0103 19:12:34.184495   25720 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0103 19:12:34.184557   25720 ssh_runner.go:195] Run: which lz4
	I0103 19:12:34.188321   25720 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0103 19:12:34.188414   25720 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0103 19:12:34.192348   25720 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0103 19:12:34.192384   25720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (495439307 bytes)
	I0103 19:12:35.931417   25720 crio.go:444] Took 1.743029 seconds to copy over tarball
	I0103 19:12:35.931488   25720 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0103 19:12:38.846182   25720 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.914672848s)
	I0103 19:12:38.846206   25720 crio.go:451] Took 2.914765 seconds to extract the tarball
	I0103 19:12:38.846214   25720 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0103 19:12:38.887938   25720 ssh_runner.go:195] Run: sudo crictl images --output json
	I0103 19:12:38.946426   25720 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0103 19:12:38.946458   25720 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0103 19:12:38.946546   25720 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 19:12:38.946564   25720 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0103 19:12:38.946583   25720 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0103 19:12:38.946594   25720 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0103 19:12:38.946667   25720 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0103 19:12:38.946817   25720 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0103 19:12:38.946829   25720 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0103 19:12:38.946862   25720 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0103 19:12:38.947870   25720 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0103 19:12:38.947898   25720 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0103 19:12:38.947919   25720 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0103 19:12:38.947945   25720 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0103 19:12:38.947967   25720 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0103 19:12:38.947900   25720 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0103 19:12:38.948104   25720 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 19:12:38.948112   25720 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0103 19:12:39.192072   25720 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0103 19:12:39.201849   25720 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0103 19:12:39.208499   25720 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0103 19:12:39.217909   25720 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0103 19:12:39.218348   25720 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0103 19:12:39.246213   25720 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0103 19:12:39.251374   25720 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0103 19:12:39.252669   25720 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I0103 19:12:39.252712   25720 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0103 19:12:39.252749   25720 ssh_runner.go:195] Run: which crictl
	I0103 19:12:39.316650   25720 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0103 19:12:39.316698   25720 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0103 19:12:39.316762   25720 ssh_runner.go:195] Run: which crictl
	I0103 19:12:39.354851   25720 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I0103 19:12:39.354896   25720 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0103 19:12:39.354944   25720 ssh_runner.go:195] Run: which crictl
	I0103 19:12:39.378312   25720 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I0103 19:12:39.378358   25720 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0103 19:12:39.378369   25720 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I0103 19:12:39.378400   25720 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0103 19:12:39.378418   25720 ssh_runner.go:195] Run: which crictl
	I0103 19:12:39.378438   25720 ssh_runner.go:195] Run: which crictl
	I0103 19:12:39.383963   25720 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I0103 19:12:39.384004   25720 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0103 19:12:39.384051   25720 ssh_runner.go:195] Run: which crictl
	I0103 19:12:39.393859   25720 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I0103 19:12:39.393897   25720 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I0103 19:12:39.393906   25720 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I0103 19:12:39.393929   25720 ssh_runner.go:195] Run: which crictl
	I0103 19:12:39.393914   25720 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0103 19:12:39.393958   25720 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0103 19:12:39.393980   25720 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0103 19:12:39.394010   25720 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I0103 19:12:39.394037   25720 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0103 19:12:39.523584   25720 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0103 19:12:39.523631   25720 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I0103 19:12:39.523646   25720 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I0103 19:12:39.523686   25720 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I0103 19:12:39.523722   25720 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I0103 19:12:39.523774   25720 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I0103 19:12:39.527927   25720 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0103 19:12:39.562155   25720 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I0103 19:12:39.797387   25720 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 19:12:39.942876   25720 cache_images.go:92] LoadImages completed in 996.400653ms
	W0103 19:12:39.942945   25720 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
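The "Unable to load cached images" warning above is non-fatal: the files under .minikube/cache/images were never created on this host, so the images are simply pulled later. To pre-populate that cache on a developer machine, something like the following would work (a sketch; this is not a step the test run performs, and `minikube cache` is deprecated in newer releases in favor of `minikube image load`):

    # Cache the same images LoadImages looked for, so later starts can copy them in.
    for img in registry.k8s.io/pause:3.2 registry.k8s.io/coredns:1.6.7 \
               registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/kube-proxy:v1.18.20; do
      minikube cache add "$img"
    done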
	I0103 19:12:39.943033   25720 ssh_runner.go:195] Run: crio config
	I0103 19:12:40.005610   25720 cni.go:84] Creating CNI manager for ""
	I0103 19:12:40.005627   25720 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0103 19:12:40.005643   25720 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0103 19:12:40.005664   25720 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.191 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-736101 NodeName:ingress-addon-legacy-736101 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.191"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.191 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0103 19:12:40.005791   25720 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.191
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-736101"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.191
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.191"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
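To sanity-check a generated config like the one above without touching the node, kubeadm's dry-run mode is the usual tool. A minimal sketch using the binary and config paths from this log (the dry run itself is not something minikube performs here):

    # Render what kubeadm would do with this config, applying nothing.
    sudo /var/lib/minikube/binaries/v1.18.20/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml --dry-run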
	I0103 19:12:40.005863   25720 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=ingress-addon-legacy-736101 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.191
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-736101 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0103 19:12:40.005914   25720 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0103 19:12:40.015268   25720 binaries.go:44] Found k8s binaries, skipping transfer
	I0103 19:12:40.015389   25720 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0103 19:12:40.024510   25720 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (436 bytes)
	I0103 19:12:40.040323   25720 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0103 19:12:40.055521   25720 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2129 bytes)
	I0103 19:12:40.071122   25720 ssh_runner.go:195] Run: grep 192.168.39.191	control-plane.minikube.internal$ /etc/hosts
	I0103 19:12:40.074668   25720 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.191	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0103 19:12:40.085304   25720 certs.go:56] Setting up /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/ingress-addon-legacy-736101 for IP: 192.168.39.191
	I0103 19:12:40.085338   25720 certs.go:190] acquiring lock for shared ca certs: {Name:mkcbd6a6a2f3ee7625ecf4a1f72bb7f9689bd33d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 19:12:40.085508   25720 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17885-9609/.minikube/ca.key
	I0103 19:12:40.085573   25720 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.key
	I0103 19:12:40.085629   25720 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/ingress-addon-legacy-736101/client.key
	I0103 19:12:40.085641   25720 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/ingress-addon-legacy-736101/client.crt with IP's: []
	I0103 19:12:40.204016   25720 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/ingress-addon-legacy-736101/client.crt ...
	I0103 19:12:40.204046   25720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/ingress-addon-legacy-736101/client.crt: {Name:mkea3df8bcd73ec52e15cc6fb8e8dd8e6dffa149 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 19:12:40.204239   25720 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/ingress-addon-legacy-736101/client.key ...
	I0103 19:12:40.204256   25720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/ingress-addon-legacy-736101/client.key: {Name:mk8e4f4ad81ccfbf5b12d2356558223755fea0a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 19:12:40.204358   25720 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/ingress-addon-legacy-736101/apiserver.key.6f081b7d
	I0103 19:12:40.204376   25720 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/ingress-addon-legacy-736101/apiserver.crt.6f081b7d with IP's: [192.168.39.191 10.96.0.1 127.0.0.1 10.0.0.1]
	I0103 19:12:40.306374   25720 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/ingress-addon-legacy-736101/apiserver.crt.6f081b7d ...
	I0103 19:12:40.306401   25720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/ingress-addon-legacy-736101/apiserver.crt.6f081b7d: {Name:mk99efa1971555527e27089236de77c8c21d0fb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 19:12:40.306574   25720 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/ingress-addon-legacy-736101/apiserver.key.6f081b7d ...
	I0103 19:12:40.306592   25720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/ingress-addon-legacy-736101/apiserver.key.6f081b7d: {Name:mk3afd5a845b3dcc44e8670e0d680e9f2fdacb61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 19:12:40.306717   25720 certs.go:337] copying /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/ingress-addon-legacy-736101/apiserver.crt.6f081b7d -> /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/ingress-addon-legacy-736101/apiserver.crt
	I0103 19:12:40.306799   25720 certs.go:341] copying /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/ingress-addon-legacy-736101/apiserver.key.6f081b7d -> /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/ingress-addon-legacy-736101/apiserver.key
	I0103 19:12:40.306849   25720 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/ingress-addon-legacy-736101/proxy-client.key
	I0103 19:12:40.306862   25720 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/ingress-addon-legacy-736101/proxy-client.crt with IP's: []
	I0103 19:12:40.580596   25720 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/ingress-addon-legacy-736101/proxy-client.crt ...
	I0103 19:12:40.580625   25720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/ingress-addon-legacy-736101/proxy-client.crt: {Name:mk285a73ccc972aed98e59231316ee04c0499b58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 19:12:40.580802   25720 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/ingress-addon-legacy-736101/proxy-client.key ...
	I0103 19:12:40.580821   25720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/ingress-addon-legacy-736101/proxy-client.key: {Name:mk5275fab6b325b5dd42476c3e6c916eaee0cb9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 19:12:40.580917   25720 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/ingress-addon-legacy-736101/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0103 19:12:40.580937   25720 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/ingress-addon-legacy-736101/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0103 19:12:40.580948   25720 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/ingress-addon-legacy-736101/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0103 19:12:40.580960   25720 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/ingress-addon-legacy-736101/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0103 19:12:40.580970   25720 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0103 19:12:40.580982   25720 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0103 19:12:40.580992   25720 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0103 19:12:40.581005   25720 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0103 19:12:40.581076   25720 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795.pem (1338 bytes)
	W0103 19:12:40.581111   25720 certs.go:433] ignoring /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795_empty.pem, impossibly tiny 0 bytes
	I0103 19:12:40.581121   25720 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem (1675 bytes)
	I0103 19:12:40.581144   25720 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem (1078 bytes)
	I0103 19:12:40.581166   25720 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem (1123 bytes)
	I0103 19:12:40.581190   25720 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem (1679 bytes)
	I0103 19:12:40.581227   25720 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem (1708 bytes)
	I0103 19:12:40.581261   25720 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0103 19:12:40.581281   25720 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795.pem -> /usr/share/ca-certificates/16795.pem
	I0103 19:12:40.581293   25720 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem -> /usr/share/ca-certificates/167952.pem
	I0103 19:12:40.581897   25720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/ingress-addon-legacy-736101/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0103 19:12:40.605183   25720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/ingress-addon-legacy-736101/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0103 19:12:40.626707   25720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/ingress-addon-legacy-736101/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0103 19:12:40.647127   25720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/ingress-addon-legacy-736101/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0103 19:12:40.668427   25720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0103 19:12:40.688397   25720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0103 19:12:40.711320   25720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0103 19:12:40.732294   25720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0103 19:12:40.753044   25720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0103 19:12:40.774173   25720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795.pem --> /usr/share/ca-certificates/16795.pem (1338 bytes)
	I0103 19:12:40.796802   25720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem --> /usr/share/ca-certificates/167952.pem (1708 bytes)
	I0103 19:12:40.819908   25720 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0103 19:12:40.835679   25720 ssh_runner.go:195] Run: openssl version
	I0103 19:12:40.841087   25720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0103 19:12:40.851308   25720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0103 19:12:40.855477   25720 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  3 18:58 /usr/share/ca-certificates/minikubeCA.pem
	I0103 19:12:40.855558   25720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0103 19:12:40.860688   25720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0103 19:12:40.870719   25720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16795.pem && ln -fs /usr/share/ca-certificates/16795.pem /etc/ssl/certs/16795.pem"
	I0103 19:12:40.880522   25720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16795.pem
	I0103 19:12:40.884578   25720 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  3 19:07 /usr/share/ca-certificates/16795.pem
	I0103 19:12:40.884635   25720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16795.pem
	I0103 19:12:40.889897   25720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16795.pem /etc/ssl/certs/51391683.0"
	I0103 19:12:40.899650   25720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167952.pem && ln -fs /usr/share/ca-certificates/167952.pem /etc/ssl/certs/167952.pem"
	I0103 19:12:40.909706   25720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167952.pem
	I0103 19:12:40.913960   25720 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  3 19:07 /usr/share/ca-certificates/167952.pem
	I0103 19:12:40.914015   25720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167952.pem
	I0103 19:12:40.919269   25720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167952.pem /etc/ssl/certs/3ec20f2e.0"
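The symlink names created above (b5213941.0, 51391683.0, 3ec20f2e.0) follow the standard OpenSSL subject-hash convention: the link is named <subject-hash>.0 and points at the certificate. The hash for any of the certs can be reproduced directly, for example:

    # Prints b5213941 for this CA, matching the /etc/ssl/certs/b5213941.0 link above.
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem \
      "/etc/ssl/certs/$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem).0"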
	I0103 19:12:40.929580   25720 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0103 19:12:40.933511   25720 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0103 19:12:40.933573   25720 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-736101 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-736101 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.191 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 19:12:40.933676   25720 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0103 19:12:40.933738   25720 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0103 19:12:40.968589   25720 cri.go:89] found id: ""
	I0103 19:12:40.968673   25720 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0103 19:12:40.977928   25720 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0103 19:12:40.986529   25720 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0103 19:12:40.995457   25720 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0103 19:12:40.995498   25720 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0103 19:12:41.050571   25720 kubeadm.go:322] W0103 19:12:41.033896     959 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0103 19:12:41.169693   25720 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0103 19:12:44.215457   25720 kubeadm.go:322] W0103 19:12:44.201505     959 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0103 19:12:44.216720   25720 kubeadm.go:322] W0103 19:12:44.202780     959 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0103 19:12:55.217774   25720 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0103 19:12:55.217866   25720 kubeadm.go:322] [preflight] Running pre-flight checks
	I0103 19:12:55.217949   25720 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0103 19:12:55.218064   25720 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0103 19:12:55.218197   25720 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0103 19:12:55.218350   25720 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0103 19:12:55.218476   25720 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0103 19:12:55.218551   25720 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0103 19:12:55.218668   25720 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0103 19:12:55.220702   25720 out.go:204]   - Generating certificates and keys ...
	I0103 19:12:55.220795   25720 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0103 19:12:55.220866   25720 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0103 19:12:55.220976   25720 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0103 19:12:55.221074   25720 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0103 19:12:55.221170   25720 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0103 19:12:55.221246   25720 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0103 19:12:55.221322   25720 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0103 19:12:55.221529   25720 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-736101 localhost] and IPs [192.168.39.191 127.0.0.1 ::1]
	I0103 19:12:55.221606   25720 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0103 19:12:55.221775   25720 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-736101 localhost] and IPs [192.168.39.191 127.0.0.1 ::1]
	I0103 19:12:55.221843   25720 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0103 19:12:55.221915   25720 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0103 19:12:55.221986   25720 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0103 19:12:55.222062   25720 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0103 19:12:55.222135   25720 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0103 19:12:55.222189   25720 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0103 19:12:55.222279   25720 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0103 19:12:55.222360   25720 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0103 19:12:55.222462   25720 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0103 19:12:55.223890   25720 out.go:204]   - Booting up control plane ...
	I0103 19:12:55.223996   25720 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0103 19:12:55.224099   25720 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0103 19:12:55.224214   25720 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0103 19:12:55.224310   25720 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0103 19:12:55.224506   25720 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0103 19:12:55.224606   25720 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.503398 seconds
	I0103 19:12:55.224737   25720 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0103 19:12:55.224915   25720 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0103 19:12:55.224999   25720 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0103 19:12:55.225158   25720 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-736101 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0103 19:12:55.225232   25720 kubeadm.go:322] [bootstrap-token] Using token: kgpm2c.m263bjgvywzitq82
	I0103 19:12:55.226824   25720 out.go:204]   - Configuring RBAC rules ...
	I0103 19:12:55.226926   25720 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0103 19:12:55.227036   25720 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0103 19:12:55.227155   25720 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0103 19:12:55.227272   25720 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0103 19:12:55.227371   25720 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0103 19:12:55.227440   25720 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0103 19:12:55.227567   25720 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0103 19:12:55.227632   25720 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0103 19:12:55.227706   25720 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0103 19:12:55.227715   25720 kubeadm.go:322] 
	I0103 19:12:55.227803   25720 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0103 19:12:55.227817   25720 kubeadm.go:322] 
	I0103 19:12:55.227907   25720 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0103 19:12:55.227916   25720 kubeadm.go:322] 
	I0103 19:12:55.227950   25720 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0103 19:12:55.228028   25720 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0103 19:12:55.228082   25720 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0103 19:12:55.228091   25720 kubeadm.go:322] 
	I0103 19:12:55.228135   25720 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0103 19:12:55.228215   25720 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0103 19:12:55.228281   25720 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0103 19:12:55.228287   25720 kubeadm.go:322] 
	I0103 19:12:55.228361   25720 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0103 19:12:55.228428   25720 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0103 19:12:55.228433   25720 kubeadm.go:322] 
	I0103 19:12:55.228502   25720 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token kgpm2c.m263bjgvywzitq82 \
	I0103 19:12:55.228588   25720 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:abd7748e33dd825416f0452914584982da7041f4caa98027889459d3fee91b12 \
	I0103 19:12:55.228610   25720 kubeadm.go:322]     --control-plane 
	I0103 19:12:55.228616   25720 kubeadm.go:322] 
	I0103 19:12:55.228689   25720 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0103 19:12:55.228698   25720 kubeadm.go:322] 
	I0103 19:12:55.228762   25720 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token kgpm2c.m263bjgvywzitq82 \
	I0103 19:12:55.228905   25720 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:abd7748e33dd825416f0452914584982da7041f4caa98027889459d3fee91b12 
	I0103 19:12:55.228921   25720 cni.go:84] Creating CNI manager for ""
	I0103 19:12:55.228928   25720 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0103 19:12:55.230569   25720 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0103 19:12:55.231901   25720 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0103 19:12:55.244686   25720 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0103 19:12:55.261582   25720 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0103 19:12:55.261680   25720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:12:55.261682   25720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=1b6a81cbc05f28310ff11df4170e79e2b8bf477a minikube.k8s.io/name=ingress-addon-legacy-736101 minikube.k8s.io/updated_at=2024_01_03T19_12_55_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:12:55.579393   25720 ops.go:34] apiserver oom_adj: -16
	I0103 19:12:55.579612   25720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:12:56.080154   25720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:12:56.580417   25720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:12:57.080348   25720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:12:57.579933   25720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:12:58.079691   25720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:12:58.580541   25720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:12:59.080548   25720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:12:59.579797   25720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:13:00.080379   25720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:13:00.580656   25720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:13:01.080655   25720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:13:01.579680   25720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:13:02.079668   25720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:13:02.580060   25720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:13:03.080389   25720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:13:03.580176   25720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:13:04.080502   25720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:13:04.580544   25720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:13:05.080296   25720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:13:05.580243   25720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:13:06.080242   25720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:13:06.580050   25720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:13:07.080556   25720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:13:07.579890   25720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:13:08.080239   25720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:13:08.580561   25720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:13:09.080624   25720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:13:09.580014   25720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:13:10.080481   25720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:13:10.177977   25720 kubeadm.go:1088] duration metric: took 14.916375383s to wait for elevateKubeSystemPrivileges.
	I0103 19:13:10.178021   25720 kubeadm.go:406] StartCluster complete in 29.244455187s
	I0103 19:13:10.178043   25720 settings.go:142] acquiring lock: {Name:mkd213c48538fa01cb82b417485055a8adbf5e4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 19:13:10.178163   25720 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17885-9609/kubeconfig
	I0103 19:13:10.179162   25720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-9609/kubeconfig: {Name:mkbd4e6a8b39f5a4a43fb71671a7bbd8b1617cf1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 19:13:10.179390   25720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0103 19:13:10.179559   25720 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0103 19:13:10.179644   25720 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-736101"
	I0103 19:13:10.179660   25720 config.go:182] Loaded profile config "ingress-addon-legacy-736101": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0103 19:13:10.179665   25720 addons.go:237] Setting addon storage-provisioner=true in "ingress-addon-legacy-736101"
	I0103 19:13:10.179783   25720 host.go:66] Checking if "ingress-addon-legacy-736101" exists ...
	I0103 19:13:10.179667   25720 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-736101"
	I0103 19:13:10.179828   25720 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-736101"
	I0103 19:13:10.180158   25720 kapi.go:59] client config for ingress-addon-legacy-736101: &rest.Config{Host:"https://192.168.39.191:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17885-9609/.minikube/profiles/ingress-addon-legacy-736101/client.crt", KeyFile:"/home/jenkins/minikube-integration/17885-9609/.minikube/profiles/ingress-addon-legacy-736101/client.key", CAFile:"/home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(
nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c20060), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0103 19:13:10.180272   25720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 19:13:10.180298   25720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 19:13:10.180275   25720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 19:13:10.180380   25720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 19:13:10.180885   25720 cert_rotation.go:137] Starting client certificate rotation controller
	I0103 19:13:10.195729   25720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41077
	I0103 19:13:10.196107   25720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34169
	I0103 19:13:10.196329   25720 main.go:141] libmachine: () Calling .GetVersion
	I0103 19:13:10.196702   25720 main.go:141] libmachine: () Calling .GetVersion
	I0103 19:13:10.196912   25720 main.go:141] libmachine: Using API Version  1
	I0103 19:13:10.196935   25720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 19:13:10.197234   25720 main.go:141] libmachine: Using API Version  1
	I0103 19:13:10.197254   25720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 19:13:10.197288   25720 main.go:141] libmachine: () Calling .GetMachineName
	I0103 19:13:10.197593   25720 main.go:141] libmachine: () Calling .GetMachineName
	I0103 19:13:10.197763   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .GetState
	I0103 19:13:10.197915   25720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 19:13:10.197969   25720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 19:13:10.200657   25720 kapi.go:59] client config for ingress-addon-legacy-736101: &rest.Config{Host:"https://192.168.39.191:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17885-9609/.minikube/profiles/ingress-addon-legacy-736101/client.crt", KeyFile:"/home/jenkins/minikube-integration/17885-9609/.minikube/profiles/ingress-addon-legacy-736101/client.key", CAFile:"/home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(
nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c20060), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0103 19:13:10.200976   25720 addons.go:237] Setting addon default-storageclass=true in "ingress-addon-legacy-736101"
	I0103 19:13:10.201010   25720 host.go:66] Checking if "ingress-addon-legacy-736101" exists ...
	I0103 19:13:10.201441   25720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 19:13:10.201493   25720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 19:13:10.213002   25720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45989
	I0103 19:13:10.213437   25720 main.go:141] libmachine: () Calling .GetVersion
	I0103 19:13:10.214027   25720 main.go:141] libmachine: Using API Version  1
	I0103 19:13:10.214056   25720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 19:13:10.214443   25720 main.go:141] libmachine: () Calling .GetMachineName
	I0103 19:13:10.214687   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .GetState
	I0103 19:13:10.216499   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .DriverName
	I0103 19:13:10.218669   25720 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 19:13:10.216960   25720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35171
	I0103 19:13:10.220190   25720 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0103 19:13:10.220210   25720 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0103 19:13:10.220232   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .GetSSHHostname
	I0103 19:13:10.220636   25720 main.go:141] libmachine: () Calling .GetVersion
	I0103 19:13:10.221232   25720 main.go:141] libmachine: Using API Version  1
	I0103 19:13:10.221259   25720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 19:13:10.221756   25720 main.go:141] libmachine: () Calling .GetMachineName
	I0103 19:13:10.222304   25720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 19:13:10.222329   25720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 19:13:10.223557   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | domain ingress-addon-legacy-736101 has defined MAC address 52:54:00:01:96:c9 in network mk-ingress-addon-legacy-736101
	I0103 19:13:10.224103   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:96:c9", ip: ""} in network mk-ingress-addon-legacy-736101: {Iface:virbr1 ExpiryTime:2024-01-03 20:12:23 +0000 UTC Type:0 Mac:52:54:00:01:96:c9 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ingress-addon-legacy-736101 Clientid:01:52:54:00:01:96:c9}
	I0103 19:13:10.224130   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | domain ingress-addon-legacy-736101 has defined IP address 192.168.39.191 and MAC address 52:54:00:01:96:c9 in network mk-ingress-addon-legacy-736101
	I0103 19:13:10.224363   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .GetSSHPort
	I0103 19:13:10.224552   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .GetSSHKeyPath
	I0103 19:13:10.224710   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .GetSSHUsername
	I0103 19:13:10.224862   25720 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/ingress-addon-legacy-736101/id_rsa Username:docker}
	I0103 19:13:10.236974   25720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39807
	I0103 19:13:10.237452   25720 main.go:141] libmachine: () Calling .GetVersion
	I0103 19:13:10.237901   25720 main.go:141] libmachine: Using API Version  1
	I0103 19:13:10.237923   25720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 19:13:10.238264   25720 main.go:141] libmachine: () Calling .GetMachineName
	I0103 19:13:10.238501   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .GetState
	I0103 19:13:10.240408   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .DriverName
	I0103 19:13:10.240679   25720 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0103 19:13:10.240694   25720 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0103 19:13:10.240709   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .GetSSHHostname
	I0103 19:13:10.244189   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | domain ingress-addon-legacy-736101 has defined MAC address 52:54:00:01:96:c9 in network mk-ingress-addon-legacy-736101
	I0103 19:13:10.244567   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:96:c9", ip: ""} in network mk-ingress-addon-legacy-736101: {Iface:virbr1 ExpiryTime:2024-01-03 20:12:23 +0000 UTC Type:0 Mac:52:54:00:01:96:c9 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ingress-addon-legacy-736101 Clientid:01:52:54:00:01:96:c9}
	I0103 19:13:10.244609   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | domain ingress-addon-legacy-736101 has defined IP address 192.168.39.191 and MAC address 52:54:00:01:96:c9 in network mk-ingress-addon-legacy-736101
	I0103 19:13:10.244881   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .GetSSHPort
	I0103 19:13:10.245094   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .GetSSHKeyPath
	I0103 19:13:10.245230   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .GetSSHUsername
	I0103 19:13:10.245386   25720 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/ingress-addon-legacy-736101/id_rsa Username:docker}
	I0103 19:13:10.395486   25720 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0103 19:13:10.406852   25720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0103 19:13:10.428308   25720 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0103 19:13:10.758757   25720 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-736101" context rescaled to 1 replicas
	I0103 19:13:10.758802   25720 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.191 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0103 19:13:10.760920   25720 out.go:177] * Verifying Kubernetes components...
	I0103 19:13:10.762684   25720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 19:13:11.156404   25720 main.go:141] libmachine: Making call to close driver server
	I0103 19:13:11.156438   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .Close
	I0103 19:13:11.156433   25720 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0103 19:13:11.156478   25720 main.go:141] libmachine: Making call to close driver server
	I0103 19:13:11.156490   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .Close
	I0103 19:13:11.156755   25720 main.go:141] libmachine: Successfully made call to close driver server
	I0103 19:13:11.156772   25720 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 19:13:11.156782   25720 main.go:141] libmachine: Making call to close driver server
	I0103 19:13:11.156792   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .Close
	I0103 19:13:11.156799   25720 main.go:141] libmachine: Successfully made call to close driver server
	I0103 19:13:11.156811   25720 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 19:13:11.156813   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | Closing plugin on server side
	I0103 19:13:11.156826   25720 main.go:141] libmachine: Making call to close driver server
	I0103 19:13:11.156840   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .Close
	I0103 19:13:11.157049   25720 main.go:141] libmachine: Successfully made call to close driver server
	I0103 19:13:11.157067   25720 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 19:13:11.157050   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | Closing plugin on server side
	I0103 19:13:11.157158   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) DBG | Closing plugin on server side
	I0103 19:13:11.157171   25720 main.go:141] libmachine: Successfully made call to close driver server
	I0103 19:13:11.157194   25720 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 19:13:11.157260   25720 kapi.go:59] client config for ingress-addon-legacy-736101: &rest.Config{Host:"https://192.168.39.191:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17885-9609/.minikube/profiles/ingress-addon-legacy-736101/client.crt", KeyFile:"/home/jenkins/minikube-integration/17885-9609/.minikube/profiles/ingress-addon-legacy-736101/client.key", CAFile:"/home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(
nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c20060), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0103 19:13:11.157597   25720 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-736101" to be "Ready" ...
	I0103 19:13:11.166317   25720 main.go:141] libmachine: Making call to close driver server
	I0103 19:13:11.166337   25720 main.go:141] libmachine: (ingress-addon-legacy-736101) Calling .Close
	I0103 19:13:11.166631   25720 main.go:141] libmachine: Successfully made call to close driver server
	I0103 19:13:11.166651   25720 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 19:13:11.168849   25720 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0103 19:13:11.170539   25720 addons.go:508] enable addons completed in 990.973811ms: enabled=[storage-provisioner default-storageclass]
	I0103 19:13:11.168171   25720 node_ready.go:49] node "ingress-addon-legacy-736101" has status "Ready":"True"
	I0103 19:13:11.170572   25720 node_ready.go:38] duration metric: took 12.955826ms waiting for node "ingress-addon-legacy-736101" to be "Ready" ...
	I0103 19:13:11.170595   25720 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 19:13:11.210329   25720 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-5jmvg" in "kube-system" namespace to be "Ready" ...
	I0103 19:13:13.216889   25720 pod_ready.go:102] pod "coredns-66bff467f8-5jmvg" in "kube-system" namespace has status "Ready":"False"
	I0103 19:13:15.217294   25720 pod_ready.go:102] pod "coredns-66bff467f8-5jmvg" in "kube-system" namespace has status "Ready":"False"
	I0103 19:13:17.716860   25720 pod_ready.go:102] pod "coredns-66bff467f8-5jmvg" in "kube-system" namespace has status "Ready":"False"
	I0103 19:13:19.716968   25720 pod_ready.go:102] pod "coredns-66bff467f8-5jmvg" in "kube-system" namespace has status "Ready":"False"
	I0103 19:13:21.718099   25720 pod_ready.go:102] pod "coredns-66bff467f8-5jmvg" in "kube-system" namespace has status "Ready":"False"
	I0103 19:13:24.217695   25720 pod_ready.go:102] pod "coredns-66bff467f8-5jmvg" in "kube-system" namespace has status "Ready":"False"
	I0103 19:13:25.717014   25720 pod_ready.go:97] error getting pod "coredns-66bff467f8-5jmvg" in "kube-system" namespace (skipping!): pods "coredns-66bff467f8-5jmvg" not found
	I0103 19:13:25.717053   25720 pod_ready.go:81] duration metric: took 14.50669242s waiting for pod "coredns-66bff467f8-5jmvg" in "kube-system" namespace to be "Ready" ...
	E0103 19:13:25.717083   25720 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-66bff467f8-5jmvg" in "kube-system" namespace (skipping!): pods "coredns-66bff467f8-5jmvg" not found
	I0103 19:13:25.717093   25720 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-m2slm" in "kube-system" namespace to be "Ready" ...
	I0103 19:13:27.724399   25720 pod_ready.go:102] pod "coredns-66bff467f8-m2slm" in "kube-system" namespace has status "Ready":"False"
	I0103 19:13:29.725008   25720 pod_ready.go:102] pod "coredns-66bff467f8-m2slm" in "kube-system" namespace has status "Ready":"False"
	I0103 19:13:31.726168   25720 pod_ready.go:102] pod "coredns-66bff467f8-m2slm" in "kube-system" namespace has status "Ready":"False"
	I0103 19:13:34.224289   25720 pod_ready.go:102] pod "coredns-66bff467f8-m2slm" in "kube-system" namespace has status "Ready":"False"
	I0103 19:13:36.224391   25720 pod_ready.go:102] pod "coredns-66bff467f8-m2slm" in "kube-system" namespace has status "Ready":"False"
	I0103 19:13:38.224608   25720 pod_ready.go:102] pod "coredns-66bff467f8-m2slm" in "kube-system" namespace has status "Ready":"False"
	I0103 19:13:40.224915   25720 pod_ready.go:102] pod "coredns-66bff467f8-m2slm" in "kube-system" namespace has status "Ready":"False"
	I0103 19:13:42.723984   25720 pod_ready.go:102] pod "coredns-66bff467f8-m2slm" in "kube-system" namespace has status "Ready":"False"
	I0103 19:13:45.223603   25720 pod_ready.go:102] pod "coredns-66bff467f8-m2slm" in "kube-system" namespace has status "Ready":"False"
	I0103 19:13:47.223754   25720 pod_ready.go:102] pod "coredns-66bff467f8-m2slm" in "kube-system" namespace has status "Ready":"False"
	I0103 19:13:47.724922   25720 pod_ready.go:92] pod "coredns-66bff467f8-m2slm" in "kube-system" namespace has status "Ready":"True"
	I0103 19:13:47.724947   25720 pod_ready.go:81] duration metric: took 22.007846498s waiting for pod "coredns-66bff467f8-m2slm" in "kube-system" namespace to be "Ready" ...
	I0103 19:13:47.724955   25720 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-736101" in "kube-system" namespace to be "Ready" ...
	I0103 19:13:47.731188   25720 pod_ready.go:92] pod "etcd-ingress-addon-legacy-736101" in "kube-system" namespace has status "Ready":"True"
	I0103 19:13:47.731213   25720 pod_ready.go:81] duration metric: took 6.250577ms waiting for pod "etcd-ingress-addon-legacy-736101" in "kube-system" namespace to be "Ready" ...
	I0103 19:13:47.731229   25720 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-736101" in "kube-system" namespace to be "Ready" ...
	I0103 19:13:47.736639   25720 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-736101" in "kube-system" namespace has status "Ready":"True"
	I0103 19:13:47.736659   25720 pod_ready.go:81] duration metric: took 5.422305ms waiting for pod "kube-apiserver-ingress-addon-legacy-736101" in "kube-system" namespace to be "Ready" ...
	I0103 19:13:47.736668   25720 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-736101" in "kube-system" namespace to be "Ready" ...
	I0103 19:13:47.742206   25720 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-736101" in "kube-system" namespace has status "Ready":"True"
	I0103 19:13:47.742229   25720 pod_ready.go:81] duration metric: took 5.553395ms waiting for pod "kube-controller-manager-ingress-addon-legacy-736101" in "kube-system" namespace to be "Ready" ...
	I0103 19:13:47.742240   25720 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6ncwc" in "kube-system" namespace to be "Ready" ...
	I0103 19:13:47.747303   25720 pod_ready.go:92] pod "kube-proxy-6ncwc" in "kube-system" namespace has status "Ready":"True"
	I0103 19:13:47.747325   25720 pod_ready.go:81] duration metric: took 5.077239ms waiting for pod "kube-proxy-6ncwc" in "kube-system" namespace to be "Ready" ...
	I0103 19:13:47.747337   25720 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-736101" in "kube-system" namespace to be "Ready" ...
	I0103 19:13:47.918815   25720 request.go:629] Waited for 171.361086ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-736101
	I0103 19:13:48.119209   25720 request.go:629] Waited for 197.464556ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.191:8443/api/v1/nodes/ingress-addon-legacy-736101
	I0103 19:13:48.122509   25720 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-736101" in "kube-system" namespace has status "Ready":"True"
	I0103 19:13:48.122556   25720 pod_ready.go:81] duration metric: took 375.210272ms waiting for pod "kube-scheduler-ingress-addon-legacy-736101" in "kube-system" namespace to be "Ready" ...
	I0103 19:13:48.122570   25720 pod_ready.go:38] duration metric: took 36.95194834s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 19:13:48.122585   25720 api_server.go:52] waiting for apiserver process to appear ...
	I0103 19:13:48.122636   25720 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 19:13:48.137192   25720 api_server.go:72] duration metric: took 37.378363202s to wait for apiserver process to appear ...
	I0103 19:13:48.137221   25720 api_server.go:88] waiting for apiserver healthz status ...
	I0103 19:13:48.137237   25720 api_server.go:253] Checking apiserver healthz at https://192.168.39.191:8443/healthz ...
	I0103 19:13:48.143194   25720 api_server.go:279] https://192.168.39.191:8443/healthz returned 200:
	ok
	I0103 19:13:48.144230   25720 api_server.go:141] control plane version: v1.18.20
	I0103 19:13:48.144254   25720 api_server.go:131] duration metric: took 7.027874ms to wait for apiserver health ...
	I0103 19:13:48.144263   25720 system_pods.go:43] waiting for kube-system pods to appear ...
	I0103 19:13:48.319012   25720 request.go:629] Waited for 174.685275ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods
	I0103 19:13:48.324379   25720 system_pods.go:59] 7 kube-system pods found
	I0103 19:13:48.324406   25720 system_pods.go:61] "coredns-66bff467f8-m2slm" [7dcf65e4-ebad-4161-83f2-6c46cecf9bba] Running
	I0103 19:13:48.324412   25720 system_pods.go:61] "etcd-ingress-addon-legacy-736101" [b0012840-ec75-40ae-a18e-6a1bc1fba768] Running
	I0103 19:13:48.324416   25720 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-736101" [1aff37b7-ad57-4a35-b9f4-2cad83627090] Running
	I0103 19:13:48.324420   25720 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-736101" [7343fca6-09c3-4b9e-bce8-c21ae406fdae] Running
	I0103 19:13:48.324423   25720 system_pods.go:61] "kube-proxy-6ncwc" [9ee1a009-cadb-4a4e-b5a6-af873d51f8d2] Running
	I0103 19:13:48.324427   25720 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-736101" [a12319bc-d25b-4e9e-beea-7d92448f4d23] Running
	I0103 19:13:48.324430   25720 system_pods.go:61] "storage-provisioner" [b4a96904-2abf-44d1-9c05-dedc7e4702b2] Running
	I0103 19:13:48.324435   25720 system_pods.go:74] duration metric: took 180.1683ms to wait for pod list to return data ...
	I0103 19:13:48.324441   25720 default_sa.go:34] waiting for default service account to be created ...
	I0103 19:13:48.518888   25720 request.go:629] Waited for 194.360701ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.191:8443/api/v1/namespaces/default/serviceaccounts
	I0103 19:13:48.521612   25720 default_sa.go:45] found service account: "default"
	I0103 19:13:48.521634   25720 default_sa.go:55] duration metric: took 197.188219ms for default service account to be created ...
	I0103 19:13:48.521642   25720 system_pods.go:116] waiting for k8s-apps to be running ...
	I0103 19:13:48.719066   25720 request.go:629] Waited for 197.359933ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods
	I0103 19:13:48.725364   25720 system_pods.go:86] 7 kube-system pods found
	I0103 19:13:48.725395   25720 system_pods.go:89] "coredns-66bff467f8-m2slm" [7dcf65e4-ebad-4161-83f2-6c46cecf9bba] Running
	I0103 19:13:48.725401   25720 system_pods.go:89] "etcd-ingress-addon-legacy-736101" [b0012840-ec75-40ae-a18e-6a1bc1fba768] Running
	I0103 19:13:48.725405   25720 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-736101" [1aff37b7-ad57-4a35-b9f4-2cad83627090] Running
	I0103 19:13:48.725409   25720 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-736101" [7343fca6-09c3-4b9e-bce8-c21ae406fdae] Running
	I0103 19:13:48.725413   25720 system_pods.go:89] "kube-proxy-6ncwc" [9ee1a009-cadb-4a4e-b5a6-af873d51f8d2] Running
	I0103 19:13:48.725417   25720 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-736101" [a12319bc-d25b-4e9e-beea-7d92448f4d23] Running
	I0103 19:13:48.725420   25720 system_pods.go:89] "storage-provisioner" [b4a96904-2abf-44d1-9c05-dedc7e4702b2] Running
	I0103 19:13:48.725426   25720 system_pods.go:126] duration metric: took 203.780049ms to wait for k8s-apps to be running ...
	I0103 19:13:48.725432   25720 system_svc.go:44] waiting for kubelet service to be running ....
	I0103 19:13:48.725478   25720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 19:13:48.739762   25720 system_svc.go:56] duration metric: took 14.317421ms WaitForService to wait for kubelet.
	I0103 19:13:48.739794   25720 kubeadm.go:581] duration metric: took 37.980969762s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0103 19:13:48.739813   25720 node_conditions.go:102] verifying NodePressure condition ...
	I0103 19:13:48.918207   25720 request.go:629] Waited for 178.317221ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.191:8443/api/v1/nodes
	I0103 19:13:48.921172   25720 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0103 19:13:48.921200   25720 node_conditions.go:123] node cpu capacity is 2
	I0103 19:13:48.921214   25720 node_conditions.go:105] duration metric: took 181.393037ms to run NodePressure ...
	I0103 19:13:48.921224   25720 start.go:228] waiting for startup goroutines ...
	I0103 19:13:48.921230   25720 start.go:233] waiting for cluster config update ...
	I0103 19:13:48.921239   25720 start.go:242] writing updated cluster config ...
	I0103 19:13:48.921468   25720 ssh_runner.go:195] Run: rm -f paused
	I0103 19:13:48.968512   25720 start.go:600] kubectl: 1.29.0, cluster: 1.18.20 (minor skew: 11)
	I0103 19:13:48.970687   25720 out.go:177] 
	W0103 19:13:48.972367   25720 out.go:239] ! /usr/local/bin/kubectl is version 1.29.0, which may have incompatibilities with Kubernetes 1.18.20.
	I0103 19:13:48.973982   25720 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0103 19:13:48.975760   25720 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-736101" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	-- Journal begins at Wed 2024-01-03 19:12:19 UTC, ends at Wed 2024-01-03 19:16:58 UTC. --
	Jan 03 19:16:58 ingress-addon-legacy-736101 crio[719]: time="2024-01-03 19:16:58.473843063Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704309418473818348,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202825,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=fc129e52-dfaa-4269-a308-e52c635c22be name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 19:16:58 ingress-addon-legacy-736101 crio[719]: time="2024-01-03 19:16:58.475721744Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=76f7bc36-b2d8-4f80-b7da-ef5f93ab1181 name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 19:16:58 ingress-addon-legacy-736101 crio[719]: time="2024-01-03 19:16:58.475804039Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=76f7bc36-b2d8-4f80-b7da-ef5f93ab1181 name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 19:16:58 ingress-addon-legacy-736101 crio[719]: time="2024-01-03 19:16:58.476086927Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:41f71f81090caca710da58006a0e3c9af0ebecfe623a533060eb8dc8b32d5c21,PodSandboxId:b251289c7ed187e0192f3caab4a3ae985bee3ed29c4c3c28a57bff8c9649199e,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1704309411881868029,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-ftv26,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7b0441e3-5edd-4273-bda7-60534c77d817,},Annotations:map[string]string{io.kubernetes.container.hash: b385d56d,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4821ad62ca4b81b11811629d4e7286be9acc92eb4b44ecbf45523be1d9f7e865,PodSandboxId:3b6f276d7cf718388cf9b66ef463dbaaff1146b68e7b93d3fe303c3d7d86166a,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,State:CONTAINER_RUNNING,CreatedAt:1704309268540420269,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cfaa1997-2011-4cec-809e-dc7487596145,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 8501d7f9,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7b8aff236e5faedaeeec412f93ee09542af2e86ccfd64440c7607ba77907fd1,PodSandboxId:aea0af10cfceec22de91df49583432d5f4bd5525966744e1d559f006b97d1df0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704309192081118014,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: b4a96904-2abf-44d1-9c05-dedc7e4702b2,},Annotations:map[string]string{io.kubernetes.container.hash: eeacd814,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c4043080fe678179ed6ca954ee8f58cbb89b0eed48125d248a0f2e2e43825ee,PodSandboxId:d405d47ff3d1c45fc36a82d80c41fbb0f9ae57bf298e6f10901f614cbdac329f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1704309191628727445,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6ncwc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ee1a009-cadb-4a4e-b5a6-af873d
51f8d2,},Annotations:map[string]string{io.kubernetes.container.hash: c4f38920,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03d79623c5a0c452947ff02b96b66cf7a97f9a5285e25d96085919ec9ee5da12,PodSandboxId:1f6b280beced0a6e6bf1e3cae91c0d9931895ec68e19dea9bcb5d5b0810d78c4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1704309191359525032,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-m2slm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7dcf65e4-ebad-4161-83f2-6c46cecf9bba,},Annotations:map[string]string{io
.kubernetes.container.hash: cbf0481d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:228475afc61d5b772006d9f7fad175b3766df72d7f6ea92ea204395c9ae45937,PodSandboxId:fcdcfa381ff3d846e5c5bed128305fd8c3c31f7f8af149d97c9dfb7714ab4aa9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1704309167643069210,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ingress-addon-legacy-736101,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be3e2132c6a9625aa2db15cb37b9c5ea,},Annotations:map[string]string{io.kubernetes.container.hash: c8a1a15e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7442a79a60a9cb3fc3dd151955ca50eb53be3d4e62095446bb542688c0df9cbe,PodSandboxId:7be02011341b40459276bdd7009abb0c31b76c815026d26696073bdd67feae25,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1704309166484677248,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name:
kube-scheduler-ingress-addon-legacy-736101,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfd4e5f259fcd7df9705d336fe345b372b855b52442ab2d99c8e0ea11fbd36fa,PodSandboxId:2784f4eff6a10212a6a709b4feedd7487f1fd79d50e871ca62e39f019d0b674b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1704309166159279829,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-a
piserver-ingress-addon-legacy-736101,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66cd17d650e0d98998ca770acbe29fb2,},Annotations:map[string]string{io.kubernetes.container.hash: bddbba01,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8819158268ecd033de51589fdcfd18d3ffedd2de11723ef031042cae0fa05df3,PodSandboxId:fc6204853ba3edeaa558420ce30e67e2d6fefa455c983da6668720cce795f71b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1704309166096267281,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-ingress-addon-legacy-736101,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=76f7bc36-b2d8-4f80-b7da-ef5f93ab1181 name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 19:16:58 ingress-addon-legacy-736101 crio[719]: time="2024-01-03 19:16:58.514804104Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=b4dbfe3e-b13b-444a-a1df-f6464d0b8838 name=/runtime.v1.RuntimeService/Version
	Jan 03 19:16:58 ingress-addon-legacy-736101 crio[719]: time="2024-01-03 19:16:58.514884936Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=b4dbfe3e-b13b-444a-a1df-f6464d0b8838 name=/runtime.v1.RuntimeService/Version
	Jan 03 19:16:58 ingress-addon-legacy-736101 crio[719]: time="2024-01-03 19:16:58.516038152Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=7d11ea24-5546-47d3-a138-8f9969ad3795 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 19:16:58 ingress-addon-legacy-736101 crio[719]: time="2024-01-03 19:16:58.516589209Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704309418516573956,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202825,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=7d11ea24-5546-47d3-a138-8f9969ad3795 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 19:16:58 ingress-addon-legacy-736101 crio[719]: time="2024-01-03 19:16:58.517210288Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=5413da25-d3d6-48f7-b9ee-255827fd181e name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 19:16:58 ingress-addon-legacy-736101 crio[719]: time="2024-01-03 19:16:58.517269014Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=5413da25-d3d6-48f7-b9ee-255827fd181e name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 19:16:58 ingress-addon-legacy-736101 crio[719]: time="2024-01-03 19:16:58.517588061Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:41f71f81090caca710da58006a0e3c9af0ebecfe623a533060eb8dc8b32d5c21,PodSandboxId:b251289c7ed187e0192f3caab4a3ae985bee3ed29c4c3c28a57bff8c9649199e,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1704309411881868029,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-ftv26,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7b0441e3-5edd-4273-bda7-60534c77d817,},Annotations:map[string]string{io.kubernetes.container.hash: b385d56d,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4821ad62ca4b81b11811629d4e7286be9acc92eb4b44ecbf45523be1d9f7e865,PodSandboxId:3b6f276d7cf718388cf9b66ef463dbaaff1146b68e7b93d3fe303c3d7d86166a,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,State:CONTAINER_RUNNING,CreatedAt:1704309268540420269,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cfaa1997-2011-4cec-809e-dc7487596145,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 8501d7f9,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7b8aff236e5faedaeeec412f93ee09542af2e86ccfd64440c7607ba77907fd1,PodSandboxId:aea0af10cfceec22de91df49583432d5f4bd5525966744e1d559f006b97d1df0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704309192081118014,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: b4a96904-2abf-44d1-9c05-dedc7e4702b2,},Annotations:map[string]string{io.kubernetes.container.hash: eeacd814,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c4043080fe678179ed6ca954ee8f58cbb89b0eed48125d248a0f2e2e43825ee,PodSandboxId:d405d47ff3d1c45fc36a82d80c41fbb0f9ae57bf298e6f10901f614cbdac329f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1704309191628727445,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6ncwc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ee1a009-cadb-4a4e-b5a6-af873d
51f8d2,},Annotations:map[string]string{io.kubernetes.container.hash: c4f38920,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03d79623c5a0c452947ff02b96b66cf7a97f9a5285e25d96085919ec9ee5da12,PodSandboxId:1f6b280beced0a6e6bf1e3cae91c0d9931895ec68e19dea9bcb5d5b0810d78c4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1704309191359525032,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-m2slm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7dcf65e4-ebad-4161-83f2-6c46cecf9bba,},Annotations:map[string]string{io
.kubernetes.container.hash: cbf0481d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:228475afc61d5b772006d9f7fad175b3766df72d7f6ea92ea204395c9ae45937,PodSandboxId:fcdcfa381ff3d846e5c5bed128305fd8c3c31f7f8af149d97c9dfb7714ab4aa9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1704309167643069210,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ingress-addon-legacy-736101,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be3e2132c6a9625aa2db15cb37b9c5ea,},Annotations:map[string]string{io.kubernetes.container.hash: c8a1a15e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7442a79a60a9cb3fc3dd151955ca50eb53be3d4e62095446bb542688c0df9cbe,PodSandboxId:7be02011341b40459276bdd7009abb0c31b76c815026d26696073bdd67feae25,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1704309166484677248,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name:
kube-scheduler-ingress-addon-legacy-736101,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfd4e5f259fcd7df9705d336fe345b372b855b52442ab2d99c8e0ea11fbd36fa,PodSandboxId:2784f4eff6a10212a6a709b4feedd7487f1fd79d50e871ca62e39f019d0b674b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1704309166159279829,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-a
piserver-ingress-addon-legacy-736101,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66cd17d650e0d98998ca770acbe29fb2,},Annotations:map[string]string{io.kubernetes.container.hash: bddbba01,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8819158268ecd033de51589fdcfd18d3ffedd2de11723ef031042cae0fa05df3,PodSandboxId:fc6204853ba3edeaa558420ce30e67e2d6fefa455c983da6668720cce795f71b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1704309166096267281,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-ingress-addon-legacy-736101,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=5413da25-d3d6-48f7-b9ee-255827fd181e name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 19:16:58 ingress-addon-legacy-736101 crio[719]: time="2024-01-03 19:16:58.556890654Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=33221ee9-4cef-4c37-99e8-ad573c64c912 name=/runtime.v1.RuntimeService/Version
	Jan 03 19:16:58 ingress-addon-legacy-736101 crio[719]: time="2024-01-03 19:16:58.556948687Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=33221ee9-4cef-4c37-99e8-ad573c64c912 name=/runtime.v1.RuntimeService/Version
	Jan 03 19:16:58 ingress-addon-legacy-736101 crio[719]: time="2024-01-03 19:16:58.558116060Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=6e0af441-8385-49e4-8381-156ad133b740 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 19:16:58 ingress-addon-legacy-736101 crio[719]: time="2024-01-03 19:16:58.558669631Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704309418558656183,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202825,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=6e0af441-8385-49e4-8381-156ad133b740 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 19:16:58 ingress-addon-legacy-736101 crio[719]: time="2024-01-03 19:16:58.559132525Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=44da8b23-7ebf-4675-b940-7a7279904699 name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 19:16:58 ingress-addon-legacy-736101 crio[719]: time="2024-01-03 19:16:58.559193764Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=44da8b23-7ebf-4675-b940-7a7279904699 name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 19:16:58 ingress-addon-legacy-736101 crio[719]: time="2024-01-03 19:16:58.559439175Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:41f71f81090caca710da58006a0e3c9af0ebecfe623a533060eb8dc8b32d5c21,PodSandboxId:b251289c7ed187e0192f3caab4a3ae985bee3ed29c4c3c28a57bff8c9649199e,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1704309411881868029,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-ftv26,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7b0441e3-5edd-4273-bda7-60534c77d817,},Annotations:map[string]string{io.kubernetes.container.hash: b385d56d,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4821ad62ca4b81b11811629d4e7286be9acc92eb4b44ecbf45523be1d9f7e865,PodSandboxId:3b6f276d7cf718388cf9b66ef463dbaaff1146b68e7b93d3fe303c3d7d86166a,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,State:CONTAINER_RUNNING,CreatedAt:1704309268540420269,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cfaa1997-2011-4cec-809e-dc7487596145,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 8501d7f9,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7b8aff236e5faedaeeec412f93ee09542af2e86ccfd64440c7607ba77907fd1,PodSandboxId:aea0af10cfceec22de91df49583432d5f4bd5525966744e1d559f006b97d1df0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704309192081118014,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: b4a96904-2abf-44d1-9c05-dedc7e4702b2,},Annotations:map[string]string{io.kubernetes.container.hash: eeacd814,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c4043080fe678179ed6ca954ee8f58cbb89b0eed48125d248a0f2e2e43825ee,PodSandboxId:d405d47ff3d1c45fc36a82d80c41fbb0f9ae57bf298e6f10901f614cbdac329f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1704309191628727445,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6ncwc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ee1a009-cadb-4a4e-b5a6-af873d
51f8d2,},Annotations:map[string]string{io.kubernetes.container.hash: c4f38920,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03d79623c5a0c452947ff02b96b66cf7a97f9a5285e25d96085919ec9ee5da12,PodSandboxId:1f6b280beced0a6e6bf1e3cae91c0d9931895ec68e19dea9bcb5d5b0810d78c4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1704309191359525032,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-m2slm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7dcf65e4-ebad-4161-83f2-6c46cecf9bba,},Annotations:map[string]string{io
.kubernetes.container.hash: cbf0481d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:228475afc61d5b772006d9f7fad175b3766df72d7f6ea92ea204395c9ae45937,PodSandboxId:fcdcfa381ff3d846e5c5bed128305fd8c3c31f7f8af149d97c9dfb7714ab4aa9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1704309167643069210,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ingress-addon-legacy-736101,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be3e2132c6a9625aa2db15cb37b9c5ea,},Annotations:map[string]string{io.kubernetes.container.hash: c8a1a15e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7442a79a60a9cb3fc3dd151955ca50eb53be3d4e62095446bb542688c0df9cbe,PodSandboxId:7be02011341b40459276bdd7009abb0c31b76c815026d26696073bdd67feae25,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1704309166484677248,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name:
kube-scheduler-ingress-addon-legacy-736101,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfd4e5f259fcd7df9705d336fe345b372b855b52442ab2d99c8e0ea11fbd36fa,PodSandboxId:2784f4eff6a10212a6a709b4feedd7487f1fd79d50e871ca62e39f019d0b674b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1704309166159279829,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-a
piserver-ingress-addon-legacy-736101,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66cd17d650e0d98998ca770acbe29fb2,},Annotations:map[string]string{io.kubernetes.container.hash: bddbba01,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8819158268ecd033de51589fdcfd18d3ffedd2de11723ef031042cae0fa05df3,PodSandboxId:fc6204853ba3edeaa558420ce30e67e2d6fefa455c983da6668720cce795f71b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1704309166096267281,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-ingress-addon-legacy-736101,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=44da8b23-7ebf-4675-b940-7a7279904699 name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 19:16:58 ingress-addon-legacy-736101 crio[719]: time="2024-01-03 19:16:58.591506121Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=73b725dd-dfa0-4796-bd98-641a7ecc97fe name=/runtime.v1.RuntimeService/Version
	Jan 03 19:16:58 ingress-addon-legacy-736101 crio[719]: time="2024-01-03 19:16:58.591587588Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=73b725dd-dfa0-4796-bd98-641a7ecc97fe name=/runtime.v1.RuntimeService/Version
	Jan 03 19:16:58 ingress-addon-legacy-736101 crio[719]: time="2024-01-03 19:16:58.593050004Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=5e74824c-b1a6-4f5a-9720-532929fa6a15 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 19:16:58 ingress-addon-legacy-736101 crio[719]: time="2024-01-03 19:16:58.593647313Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704309418593629526,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202825,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=5e74824c-b1a6-4f5a-9720-532929fa6a15 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 19:16:58 ingress-addon-legacy-736101 crio[719]: time="2024-01-03 19:16:58.594250860Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b8113dc1-6234-47f8-89d6-b3aa7651952e name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 19:16:58 ingress-addon-legacy-736101 crio[719]: time="2024-01-03 19:16:58.594356546Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b8113dc1-6234-47f8-89d6-b3aa7651952e name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 19:16:58 ingress-addon-legacy-736101 crio[719]: time="2024-01-03 19:16:58.594567987Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:41f71f81090caca710da58006a0e3c9af0ebecfe623a533060eb8dc8b32d5c21,PodSandboxId:b251289c7ed187e0192f3caab4a3ae985bee3ed29c4c3c28a57bff8c9649199e,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1704309411881868029,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-ftv26,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7b0441e3-5edd-4273-bda7-60534c77d817,},Annotations:map[string]string{io.kubernetes.container.hash: b385d56d,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4821ad62ca4b81b11811629d4e7286be9acc92eb4b44ecbf45523be1d9f7e865,PodSandboxId:3b6f276d7cf718388cf9b66ef463dbaaff1146b68e7b93d3fe303c3d7d86166a,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,State:CONTAINER_RUNNING,CreatedAt:1704309268540420269,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cfaa1997-2011-4cec-809e-dc7487596145,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 8501d7f9,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7b8aff236e5faedaeeec412f93ee09542af2e86ccfd64440c7607ba77907fd1,PodSandboxId:aea0af10cfceec22de91df49583432d5f4bd5525966744e1d559f006b97d1df0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704309192081118014,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: b4a96904-2abf-44d1-9c05-dedc7e4702b2,},Annotations:map[string]string{io.kubernetes.container.hash: eeacd814,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c4043080fe678179ed6ca954ee8f58cbb89b0eed48125d248a0f2e2e43825ee,PodSandboxId:d405d47ff3d1c45fc36a82d80c41fbb0f9ae57bf298e6f10901f614cbdac329f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1704309191628727445,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6ncwc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ee1a009-cadb-4a4e-b5a6-af873d
51f8d2,},Annotations:map[string]string{io.kubernetes.container.hash: c4f38920,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03d79623c5a0c452947ff02b96b66cf7a97f9a5285e25d96085919ec9ee5da12,PodSandboxId:1f6b280beced0a6e6bf1e3cae91c0d9931895ec68e19dea9bcb5d5b0810d78c4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1704309191359525032,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-m2slm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7dcf65e4-ebad-4161-83f2-6c46cecf9bba,},Annotations:map[string]string{io
.kubernetes.container.hash: cbf0481d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:228475afc61d5b772006d9f7fad175b3766df72d7f6ea92ea204395c9ae45937,PodSandboxId:fcdcfa381ff3d846e5c5bed128305fd8c3c31f7f8af149d97c9dfb7714ab4aa9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1704309167643069210,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ingress-addon-legacy-736101,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be3e2132c6a9625aa2db15cb37b9c5ea,},Annotations:map[string]string{io.kubernetes.container.hash: c8a1a15e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7442a79a60a9cb3fc3dd151955ca50eb53be3d4e62095446bb542688c0df9cbe,PodSandboxId:7be02011341b40459276bdd7009abb0c31b76c815026d26696073bdd67feae25,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1704309166484677248,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name:
kube-scheduler-ingress-addon-legacy-736101,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfd4e5f259fcd7df9705d336fe345b372b855b52442ab2d99c8e0ea11fbd36fa,PodSandboxId:2784f4eff6a10212a6a709b4feedd7487f1fd79d50e871ca62e39f019d0b674b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1704309166159279829,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-a
piserver-ingress-addon-legacy-736101,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66cd17d650e0d98998ca770acbe29fb2,},Annotations:map[string]string{io.kubernetes.container.hash: bddbba01,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8819158268ecd033de51589fdcfd18d3ffedd2de11723ef031042cae0fa05df3,PodSandboxId:fc6204853ba3edeaa558420ce30e67e2d6fefa455c983da6668720cce795f71b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1704309166096267281,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-ingress-addon-legacy-736101,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b8113dc1-6234-47f8-89d6-b3aa7651952e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                     CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	41f71f81090ca       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7   6 seconds ago       Running             hello-world-app           0                   b251289c7ed18       hello-world-app-5f5d8b66bb-ftv26
	4821ad62ca4b8       docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686           2 minutes ago       Running             nginx                     0                   3b6f276d7cf71       nginx
	d7b8aff236e5f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                          3 minutes ago       Running             storage-provisioner       0                   aea0af10cfcee       storage-provisioner
	7c4043080fe67       27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba                                          3 minutes ago       Running             kube-proxy                0                   d405d47ff3d1c       kube-proxy-6ncwc
	03d79623c5a0c       67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5                                          3 minutes ago       Running             coredns                   0                   1f6b280beced0       coredns-66bff467f8-m2slm
	228475afc61d5       303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f                                          4 minutes ago       Running             etcd                      0                   fcdcfa381ff3d       etcd-ingress-addon-legacy-736101
	7442a79a60a9c       a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346                                          4 minutes ago       Running             kube-scheduler            0                   7be02011341b4       kube-scheduler-ingress-addon-legacy-736101
	bfd4e5f259fcd       7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1                                          4 minutes ago       Running             kube-apiserver            0                   2784f4eff6a10       kube-apiserver-ingress-addon-legacy-736101
	8819158268ecd       e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290                                          4 minutes ago       Running             kube-controller-manager   0                   fc6204853ba3e       kube-controller-manager-ingress-addon-legacy-736101
	
	
	==> coredns [03d79623c5a0c452947ff02b96b66cf7a97f9a5285e25d96085919ec9ee5da12] <==
	I0103 19:13:41.599209       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2024-01-03 19:13:11.598250159 +0000 UTC m=+0.038780690) (total time: 30.00083958s):
	Trace[2019727887]: [30.00083958s] [30.00083958s] END
	E0103 19:13:41.599362       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	I0103 19:13:41.603066       1 trace.go:116] Trace[1427131847]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2024-01-03 19:13:11.602119437 +0000 UTC m=+0.042649971) (total time: 30.00092225s):
	Trace[1427131847]: [30.00092225s] [30.00092225s] END
	E0103 19:13:41.603150       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	I0103 19:13:41.603220       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2024-01-03 19:13:11.602660416 +0000 UTC m=+0.043190956) (total time: 30.000544479s):
	Trace[939984059]: [30.000544479s] [30.000544479s] END
	E0103 19:13:41.603255       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
	CoreDNS-1.6.7
	linux/amd64, go1.13.6, da7f65b
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = 6dca4351036a5cca7eefa7c93a3dea30
	[INFO] Reloading complete
	[INFO] 127.0.0.1:49650 - 9553 "HINFO IN 7109349617767745084.8391972957827684907. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009749138s
	
	
	==> describe nodes <==
	Name:               ingress-addon-legacy-736101
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ingress-addon-legacy-736101
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b6a81cbc05f28310ff11df4170e79e2b8bf477a
	                    minikube.k8s.io/name=ingress-addon-legacy-736101
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_03T19_12_55_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 03 Jan 2024 19:12:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-736101
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 03 Jan 2024 19:16:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 03 Jan 2024 19:16:55 +0000   Wed, 03 Jan 2024 19:12:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 03 Jan 2024 19:16:55 +0000   Wed, 03 Jan 2024 19:12:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 03 Jan 2024 19:16:55 +0000   Wed, 03 Jan 2024 19:12:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 03 Jan 2024 19:16:55 +0000   Wed, 03 Jan 2024 19:12:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.191
	  Hostname:    ingress-addon-legacy-736101
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             4012800Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             4012800Ki
	  pods:               110
	System Info:
	  Machine ID:                 da4491016b0145728267d97e158b550c
	  System UUID:                da449101-6b01-4572-8267-d97e158b550c
	  Boot ID:                    fc445a14-f8fc-42e1-b55d-6bec7366c3e9
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-ftv26                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m35s
	  kube-system                 coredns-66bff467f8-m2slm                               100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     3m49s
	  kube-system                 etcd-ingress-addon-legacy-736101                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 kube-apiserver-ingress-addon-legacy-736101             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-736101    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 kube-proxy-6ncwc                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 kube-scheduler-ingress-addon-legacy-736101             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             70Mi (1%)   170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  4m14s (x5 over 4m14s)  kubelet     Node ingress-addon-legacy-736101 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m14s (x5 over 4m14s)  kubelet     Node ingress-addon-legacy-736101 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m14s (x4 over 4m14s)  kubelet     Node ingress-addon-legacy-736101 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m3s                   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m3s                   kubelet     Node ingress-addon-legacy-736101 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m3s                   kubelet     Node ingress-addon-legacy-736101 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m3s                   kubelet     Node ingress-addon-legacy-736101 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m3s                   kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                4m3s                   kubelet     Node ingress-addon-legacy-736101 status is now: NodeReady
	  Normal  Starting                 3m47s                  kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	[Jan 3 19:12] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.087120] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.351437] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.709842] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.121849] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +4.992584] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.778298] systemd-fstab-generator[643]: Ignoring "noauto" for root device
	[  +0.111687] systemd-fstab-generator[654]: Ignoring "noauto" for root device
	[  +0.147818] systemd-fstab-generator[667]: Ignoring "noauto" for root device
	[  +0.110797] systemd-fstab-generator[678]: Ignoring "noauto" for root device
	[  +0.207347] systemd-fstab-generator[703]: Ignoring "noauto" for root device
	[  +7.654508] systemd-fstab-generator[1030]: Ignoring "noauto" for root device
	[  +3.312407] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[ +10.133265] systemd-fstab-generator[1435]: Ignoring "noauto" for root device
	[Jan 3 19:13] kauditd_printk_skb: 6 callbacks suppressed
	[ +10.692332] kauditd_printk_skb: 16 callbacks suppressed
	[ +33.120253] kauditd_printk_skb: 6 callbacks suppressed
	[Jan 3 19:14] kauditd_printk_skb: 7 callbacks suppressed
	[  +6.617402] kauditd_printk_skb: 3 callbacks suppressed
	[Jan 3 19:16] kauditd_printk_skb: 5 callbacks suppressed
	
	
	==> etcd [228475afc61d5b772006d9f7fad175b3766df72d7f6ea92ea204395c9ae45937] <==
	raft2024/01/03 19:12:47 INFO: newRaft f21a8e08563785d2 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2024/01/03 19:12:47 INFO: f21a8e08563785d2 became follower at term 1
	raft2024/01/03 19:12:47 INFO: f21a8e08563785d2 switched to configuration voters=(17445412273030399442)
	2024-01-03 19:12:47.805440 W | auth: simple token is not cryptographically signed
	2024-01-03 19:12:47.814055 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2024-01-03 19:12:47.815145 I | etcdserver: f21a8e08563785d2 as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2024/01/03 19:12:47 INFO: f21a8e08563785d2 switched to configuration voters=(17445412273030399442)
	2024-01-03 19:12:47.815676 I | etcdserver/membership: added member f21a8e08563785d2 [https://192.168.39.191:2380] to cluster 78cc5c67b96828b5
	2024-01-03 19:12:47.816939 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2024-01-03 19:12:47.817154 I | embed: listening for peers on 192.168.39.191:2380
	2024-01-03 19:12:47.817483 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2024/01/03 19:12:48 INFO: f21a8e08563785d2 is starting a new election at term 1
	raft2024/01/03 19:12:48 INFO: f21a8e08563785d2 became candidate at term 2
	raft2024/01/03 19:12:48 INFO: f21a8e08563785d2 received MsgVoteResp from f21a8e08563785d2 at term 2
	raft2024/01/03 19:12:48 INFO: f21a8e08563785d2 became leader at term 2
	raft2024/01/03 19:12:48 INFO: raft.node: f21a8e08563785d2 elected leader f21a8e08563785d2 at term 2
	2024-01-03 19:12:48.799067 I | etcdserver: setting up the initial cluster version to 3.4
	2024-01-03 19:12:48.799404 I | embed: ready to serve client requests
	2024-01-03 19:12:48.799500 I | etcdserver: published {Name:ingress-addon-legacy-736101 ClientURLs:[https://192.168.39.191:2379]} to cluster 78cc5c67b96828b5
	2024-01-03 19:12:48.799717 I | embed: ready to serve client requests
	2024-01-03 19:12:48.800829 I | embed: serving client requests on 192.168.39.191:2379
	2024-01-03 19:12:48.801595 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-01-03 19:12:48.801732 I | etcdserver/api: enabled capabilities for version 3.4
	2024-01-03 19:12:48.802360 I | embed: serving client requests on 127.0.0.1:2379
	2024-01-03 19:14:02.639935 W | etcdserver: read-only range request "key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" " with result "range_response_count:3 size:13726" took too long (171.681447ms) to execute
	
	
	==> kernel <==
	 19:16:58 up 4 min,  0 users,  load average: 0.08, 0.22, 0.11
	Linux ingress-addon-legacy-736101 5.10.57 #1 SMP Sat Dec 16 11:03:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [bfd4e5f259fcd7df9705d336fe345b372b855b52442ab2d99c8e0ea11fbd36fa] <==
	I0103 19:12:51.722655       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
	E0103 19:12:51.730087       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.39.191, ResourceVersion: 0, AdditionalErrorMsg: 
	I0103 19:12:51.786109       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0103 19:12:51.786154       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0103 19:12:51.786167       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0103 19:12:51.790638       1 cache.go:39] Caches are synced for autoregister controller
	I0103 19:12:51.790934       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0103 19:12:52.681845       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0103 19:12:52.681948       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0103 19:12:52.690510       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0103 19:12:52.696163       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0103 19:12:52.696228       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0103 19:12:53.157202       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0103 19:12:53.216279       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0103 19:12:53.335339       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.39.191]
	I0103 19:12:53.336209       1 controller.go:609] quota admission added evaluator for: endpoints
	I0103 19:12:53.340410       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0103 19:12:54.037195       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0103 19:12:55.059226       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0103 19:12:55.169189       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0103 19:12:55.609378       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0103 19:13:09.531899       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0103 19:13:09.684611       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0103 19:13:49.814171       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0103 19:14:23.026215       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	
	
	==> kube-controller-manager [8819158268ecd033de51589fdcfd18d3ffedd2de11723ef031042cae0fa05df3] <==
	I0103 19:13:09.783631       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"6f6a5440-3984-42f6-9175-633066a98015", APIVersion:"apps/v1", ResourceVersion:"308", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-m2slm
	I0103 19:13:09.787140       1 shared_informer.go:230] Caches are synced for stateful set 
	I0103 19:13:09.792695       1 shared_informer.go:230] Caches are synced for disruption 
	I0103 19:13:09.792821       1 disruption.go:339] Sending events to api server.
	I0103 19:13:09.798755       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"59f6a808-884a-4e00-aa69-003b41cc4c43", APIVersion:"apps/v1", ResourceVersion:"221", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-6ncwc
	E0103 19:13:09.807424       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	I0103 19:13:09.817111       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"6f6a5440-3984-42f6-9175-633066a98015", APIVersion:"apps/v1", ResourceVersion:"308", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-5jmvg
	I0103 19:13:10.047055       1 shared_informer.go:230] Caches are synced for attach detach 
	I0103 19:13:10.069909       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0103 19:13:10.089204       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0103 19:13:10.089241       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0103 19:13:10.134948       1 shared_informer.go:230] Caches are synced for resource quota 
	I0103 19:13:10.155235       1 shared_informer.go:230] Caches are synced for resource quota 
	I0103 19:13:10.155457       1 shared_informer.go:230] Caches are synced for bootstrap_signer 
	I0103 19:13:10.221415       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"0f54f4b6-9647-4d1e-bd3b-e31a1311706f", APIVersion:"apps/v1", ResourceVersion:"353", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I0103 19:13:10.270820       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"6f6a5440-3984-42f6-9175-633066a98015", APIVersion:"apps/v1", ResourceVersion:"354", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-5jmvg
	I0103 19:13:49.815625       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"282d9760-795e-4a44-bc61-32041e8c97c2", APIVersion:"apps/v1", ResourceVersion:"467", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0103 19:13:49.843091       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"b44aa91e-2153-452b-bf17-987458928b24", APIVersion:"batch/v1", ResourceVersion:"469", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-nl4zh
	I0103 19:13:49.850148       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"376b6ae4-5cd9-49d7-8557-1248a62282ac", APIVersion:"apps/v1", ResourceVersion:"468", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-js5vk
	I0103 19:13:49.899868       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"de395acb-83fd-42dd-9d44-2e251f6c440b", APIVersion:"batch/v1", ResourceVersion:"478", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-k9jnq
	I0103 19:13:54.866571       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"b44aa91e-2153-452b-bf17-987458928b24", APIVersion:"batch/v1", ResourceVersion:"479", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0103 19:13:55.879670       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"de395acb-83fd-42dd-9d44-2e251f6c440b", APIVersion:"batch/v1", ResourceVersion:"488", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0103 19:16:47.800777       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"1560d700-851e-4985-8dd4-d5aed03247c7", APIVersion:"apps/v1", ResourceVersion:"697", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0103 19:16:47.801046       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"894b0265-d3b7-4a8a-b027-3bf338f53841", APIVersion:"apps/v1", ResourceVersion:"698", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-ftv26
	E0103 19:16:55.925750       1 tokens_controller.go:261] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-cpwss" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
	
	
	==> kube-proxy [7c4043080fe678179ed6ca954ee8f58cbb89b0eed48125d248a0f2e2e43825ee] <==
	W0103 19:13:11.913537       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0103 19:13:11.924412       1 node.go:136] Successfully retrieved node IP: 192.168.39.191
	I0103 19:13:11.924534       1 server_others.go:186] Using iptables Proxier.
	I0103 19:13:11.924741       1 server.go:583] Version: v1.18.20
	I0103 19:13:11.929157       1 config.go:315] Starting service config controller
	I0103 19:13:11.929222       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0103 19:13:11.929254       1 config.go:133] Starting endpoints config controller
	I0103 19:13:11.929274       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0103 19:13:12.029623       1 shared_informer.go:230] Caches are synced for service config 
	I0103 19:13:12.029692       1 shared_informer.go:230] Caches are synced for endpoints config 
	
	
	==> kube-scheduler [7442a79a60a9cb3fc3dd151955ca50eb53be3d4e62095446bb542688c0df9cbe] <==
	I0103 19:12:51.789099       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0103 19:12:51.791540       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0103 19:12:51.791572       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0103 19:12:51.791759       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0103 19:12:51.796628       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0103 19:12:51.796758       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0103 19:12:51.800049       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0103 19:12:51.808712       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0103 19:12:51.808922       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0103 19:12:51.809015       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0103 19:12:51.809080       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0103 19:12:51.809161       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0103 19:12:51.809211       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0103 19:12:51.809882       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0103 19:12:51.810236       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0103 19:12:51.810419       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0103 19:12:52.621500       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0103 19:12:52.759752       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0103 19:12:52.880631       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0103 19:12:52.886566       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0103 19:12:52.935510       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0103 19:12:52.985940       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0103 19:12:53.004206       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0103 19:12:53.007889       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0103 19:12:55.892333       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	-- Journal begins at Wed 2024-01-03 19:12:19 UTC, ends at Wed 2024-01-03 19:16:59 UTC. --
	Jan 03 19:14:07 ingress-addon-legacy-736101 kubelet[1442]: I0103 19:14:07.215503    1442 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "minikube-ingress-dns-token-zb75r" (UniqueName: "kubernetes.io/secret/6dd96e89-686c-4d85-acfa-160b46000af5-minikube-ingress-dns-token-zb75r") pod "kube-ingress-dns-minikube" (UID: "6dd96e89-686c-4d85-acfa-160b46000af5")
	Jan 03 19:14:23 ingress-addon-legacy-736101 kubelet[1442]: I0103 19:14:23.212071    1442 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Jan 03 19:14:23 ingress-addon-legacy-736101 kubelet[1442]: I0103 19:14:23.366616    1442 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-ktmcf" (UniqueName: "kubernetes.io/secret/cfaa1997-2011-4cec-809e-dc7487596145-default-token-ktmcf") pod "nginx" (UID: "cfaa1997-2011-4cec-809e-dc7487596145")
	Jan 03 19:16:47 ingress-addon-legacy-736101 kubelet[1442]: I0103 19:16:47.805613    1442 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Jan 03 19:16:47 ingress-addon-legacy-736101 kubelet[1442]: I0103 19:16:47.844256    1442 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-ktmcf" (UniqueName: "kubernetes.io/secret/7b0441e3-5edd-4273-bda7-60534c77d817-default-token-ktmcf") pod "hello-world-app-5f5d8b66bb-ftv26" (UID: "7b0441e3-5edd-4273-bda7-60534c77d817")
	Jan 03 19:16:48 ingress-addon-legacy-736101 kubelet[1442]: I0103 19:16:48.837022    1442 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 4c4178e66e52f0151728bc5210946b1948f0f933d1dbb2c57036a9414d105859
	Jan 03 19:16:48 ingress-addon-legacy-736101 kubelet[1442]: I0103 19:16:48.866780    1442 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 4c4178e66e52f0151728bc5210946b1948f0f933d1dbb2c57036a9414d105859
	Jan 03 19:16:48 ingress-addon-legacy-736101 kubelet[1442]: E0103 19:16:48.867413    1442 remote_runtime.go:295] ContainerStatus "4c4178e66e52f0151728bc5210946b1948f0f933d1dbb2c57036a9414d105859" from runtime service failed: rpc error: code = NotFound desc = could not find container "4c4178e66e52f0151728bc5210946b1948f0f933d1dbb2c57036a9414d105859": container with ID starting with 4c4178e66e52f0151728bc5210946b1948f0f933d1dbb2c57036a9414d105859 not found: ID does not exist
	Jan 03 19:16:48 ingress-addon-legacy-736101 kubelet[1442]: I0103 19:16:48.949080    1442 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-zb75r" (UniqueName: "kubernetes.io/secret/6dd96e89-686c-4d85-acfa-160b46000af5-minikube-ingress-dns-token-zb75r") pod "6dd96e89-686c-4d85-acfa-160b46000af5" (UID: "6dd96e89-686c-4d85-acfa-160b46000af5")
	Jan 03 19:16:48 ingress-addon-legacy-736101 kubelet[1442]: I0103 19:16:48.952410    1442 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dd96e89-686c-4d85-acfa-160b46000af5-minikube-ingress-dns-token-zb75r" (OuterVolumeSpecName: "minikube-ingress-dns-token-zb75r") pod "6dd96e89-686c-4d85-acfa-160b46000af5" (UID: "6dd96e89-686c-4d85-acfa-160b46000af5"). InnerVolumeSpecName "minikube-ingress-dns-token-zb75r". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 03 19:16:49 ingress-addon-legacy-736101 kubelet[1442]: I0103 19:16:49.049534    1442 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-zb75r" (UniqueName: "kubernetes.io/secret/6dd96e89-686c-4d85-acfa-160b46000af5-minikube-ingress-dns-token-zb75r") on node "ingress-addon-legacy-736101" DevicePath ""
	Jan 03 19:16:49 ingress-addon-legacy-736101 kubelet[1442]: E0103 19:16:49.643206    1442 kubelet_pods.go:1235] Failed killing the pod "kube-ingress-dns-minikube": failed to "KillContainer" for "minikube-ingress-dns" with KillContainerError: "rpc error: code = NotFound desc = could not find container \"4c4178e66e52f0151728bc5210946b1948f0f933d1dbb2c57036a9414d105859\": container with ID starting with 4c4178e66e52f0151728bc5210946b1948f0f933d1dbb2c57036a9414d105859 not found: ID does not exist"
	Jan 03 19:16:51 ingress-addon-legacy-736101 kubelet[1442]: E0103 19:16:51.191394    1442 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-js5vk.17a6ec61310d8733", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-js5vk", UID:"7f826d59-285b-4f4f-991a-4662957b9254", APIVersion:"v1", ResourceVersion:"477", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-736101"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc15d8a88cb0ae933, ext:236173035344, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc15d8a88cb0ae933, ext:236173035344, loc:(*time.Location)(0x701e5a0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-js5vk.17a6ec61310d8733" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jan 03 19:16:51 ingress-addon-legacy-736101 kubelet[1442]: E0103 19:16:51.317994    1442 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-js5vk.17a6ec61310d8733", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-js5vk", UID:"7f826d59-285b-4f4f-991a-4662957b9254", APIVersion:"v1", ResourceVersion:"477", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-736101"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc15d8a88cb0ae933, ext:236173035344, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc15d8a88d25ede64, ext:236295978125, loc:(*time.Location)(0x701e5a0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-js5vk.17a6ec61310d8733" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jan 03 19:16:53 ingress-addon-legacy-736101 kubelet[1442]: W0103 19:16:53.880883    1442 pod_container_deletor.go:77] Container "b63920ba802065cb1fba978aee961718c3a6d60f79126e9d14ddac11b1be08eb" not found in pod's containers
	Jan 03 19:16:55 ingress-addon-legacy-736101 kubelet[1442]: I0103 19:16:55.369457    1442 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-crdpv" (UniqueName: "kubernetes.io/secret/7f826d59-285b-4f4f-991a-4662957b9254-ingress-nginx-token-crdpv") pod "7f826d59-285b-4f4f-991a-4662957b9254" (UID: "7f826d59-285b-4f4f-991a-4662957b9254")
	Jan 03 19:16:55 ingress-addon-legacy-736101 kubelet[1442]: I0103 19:16:55.369520    1442 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/7f826d59-285b-4f4f-991a-4662957b9254-webhook-cert") pod "7f826d59-285b-4f4f-991a-4662957b9254" (UID: "7f826d59-285b-4f4f-991a-4662957b9254")
	Jan 03 19:16:55 ingress-addon-legacy-736101 kubelet[1442]: I0103 19:16:55.372665    1442 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f826d59-285b-4f4f-991a-4662957b9254-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "7f826d59-285b-4f4f-991a-4662957b9254" (UID: "7f826d59-285b-4f4f-991a-4662957b9254"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 03 19:16:55 ingress-addon-legacy-736101 kubelet[1442]: I0103 19:16:55.374479    1442 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f826d59-285b-4f4f-991a-4662957b9254-ingress-nginx-token-crdpv" (OuterVolumeSpecName: "ingress-nginx-token-crdpv") pod "7f826d59-285b-4f4f-991a-4662957b9254" (UID: "7f826d59-285b-4f4f-991a-4662957b9254"). InnerVolumeSpecName "ingress-nginx-token-crdpv". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 03 19:16:55 ingress-addon-legacy-736101 kubelet[1442]: I0103 19:16:55.469911    1442 reconciler.go:319] Volume detached for volume "ingress-nginx-token-crdpv" (UniqueName: "kubernetes.io/secret/7f826d59-285b-4f4f-991a-4662957b9254-ingress-nginx-token-crdpv") on node "ingress-addon-legacy-736101" DevicePath ""
	Jan 03 19:16:55 ingress-addon-legacy-736101 kubelet[1442]: I0103 19:16:55.469971    1442 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/7f826d59-285b-4f4f-991a-4662957b9254-webhook-cert") on node "ingress-addon-legacy-736101" DevicePath ""
	Jan 03 19:16:55 ingress-addon-legacy-736101 kubelet[1442]: I0103 19:16:55.533715    1442 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: a0ab79aaa42ccef10393058c46ea9bf6c29820692882fe5189fe1ce0f2a1f177
	Jan 03 19:16:55 ingress-addon-legacy-736101 kubelet[1442]: I0103 19:16:55.561336    1442 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: f42bd0b50b187b71ac6cef11f7defac2341168a779abf6246b7fcf271fb44a43
	Jan 03 19:16:55 ingress-addon-legacy-736101 kubelet[1442]: I0103 19:16:55.582354    1442 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: a01e434ec9a5a5fe9e1a418f6fe6964c40f925d6757060e8414e49cc5472e2d9
	Jan 03 19:16:55 ingress-addon-legacy-736101 kubelet[1442]: W0103 19:16:55.647734    1442 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/7f826d59-285b-4f4f-991a-4662957b9254/volumes" does not exist
	
	
	==> storage-provisioner [d7b8aff236e5faedaeeec412f93ee09542af2e86ccfd64440c7607ba77907fd1] <==
	I0103 19:13:12.192767       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0103 19:13:12.203515       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0103 19:13:12.203681       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0103 19:13:12.218816       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0103 19:13:12.219077       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-736101_6316f718-bee3-4a7f-b539-97c84bea27d6!
	I0103 19:13:12.223365       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3303ab2b-3559-4717-a6a9-534d36780464", APIVersion:"v1", ResourceVersion:"393", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-736101_6316f718-bee3-4a7f-b539-97c84bea27d6 became leader
	I0103 19:13:12.319879       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-736101_6316f718-bee3-4a7f-b539-97c84bea27d6!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ingress-addon-legacy-736101 -n ingress-addon-legacy-736101
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-736101 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (172.44s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (3.15s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-484895 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-484895 -- exec busybox-5bc68d56bd-lmcnh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-484895 -- exec busybox-5bc68d56bd-lmcnh -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-484895 -- exec busybox-5bc68d56bd-lmcnh -- sh -c "ping -c 1 192.168.39.1": exit status 1 (203.938193ms)

                                                
                                                
-- stdout --
	PING 192.168.39.1 (192.168.39.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:600: Failed to ping host (192.168.39.1) from pod (busybox-5bc68d56bd-lmcnh): exit status 1
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-484895 -- exec busybox-5bc68d56bd-xlczw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-484895 -- exec busybox-5bc68d56bd-xlczw -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-484895 -- exec busybox-5bc68d56bd-xlczw -- sh -c "ping -c 1 192.168.39.1": exit status 1 (198.348221ms)

                                                
                                                
-- stdout --
	PING 192.168.39.1 (192.168.39.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:600: Failed to ping host (192.168.39.1) from pod (busybox-5bc68d56bd-xlczw): exit status 1
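The "ping: permission denied (are you root?)" message is busybox reporting that it cannot open a raw ICMP socket, most likely because the container's capability set does not include NET_RAW (CRI-O's default capabilities are narrower than some other runtimes'), so the echo request to the host gateway is never sent. A minimal reproduction sketch, assuming the missing capability is the cause; the pod name "ping-raw" is hypothetical, while the context name and gateway address are taken from the commands above:

	# hypothetical one-off pod; adding NET_RAW lets busybox ping open a raw socket
	kubectl --context multinode-484895 run ping-raw --image=busybox --restart=Never \
	  --overrides='{"apiVersion":"v1","spec":{"containers":[{"name":"ping-raw","image":"busybox","command":["ping","-c","1","192.168.39.1"],"securityContext":{"capabilities":{"add":["NET_RAW"]}}}]}}'

If the ping succeeds once the capability is added, the failure above points at pod security configuration rather than routing between the pod network and the KVM host.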
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-484895 -n multinode-484895
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-484895 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-484895 logs -n 25: (1.250089716s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | mount-start-2-946151 ssh -- ls                    | mount-start-2-946151 | jenkins | v1.32.0 | 03 Jan 24 19:20 UTC | 03 Jan 24 19:20 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-946151 ssh --                       | mount-start-2-946151 | jenkins | v1.32.0 | 03 Jan 24 19:20 UTC | 03 Jan 24 19:20 UTC |
	|         | mount | grep 9p                                   |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-946151                           | mount-start-2-946151 | jenkins | v1.32.0 | 03 Jan 24 19:20 UTC | 03 Jan 24 19:20 UTC |
	| start   | -p mount-start-2-946151                           | mount-start-2-946151 | jenkins | v1.32.0 | 03 Jan 24 19:20 UTC | 03 Jan 24 19:20 UTC |
	| mount   | /home/jenkins:/minikube-host                      | mount-start-2-946151 | jenkins | v1.32.0 | 03 Jan 24 19:20 UTC |                     |
	|         | --profile mount-start-2-946151                    |                      |         |         |                     |                     |
	|         | --v 0 --9p-version 9p2000.L                       |                      |         |         |                     |                     |
	|         | --gid 0 --ip  --msize 6543                        |                      |         |         |                     |                     |
	|         | --port 46465 --type 9p --uid 0                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-946151 ssh -- ls                    | mount-start-2-946151 | jenkins | v1.32.0 | 03 Jan 24 19:20 UTC | 03 Jan 24 19:20 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-946151 ssh --                       | mount-start-2-946151 | jenkins | v1.32.0 | 03 Jan 24 19:20 UTC | 03 Jan 24 19:20 UTC |
	|         | mount | grep 9p                                   |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-946151                           | mount-start-2-946151 | jenkins | v1.32.0 | 03 Jan 24 19:20 UTC | 03 Jan 24 19:21 UTC |
	| delete  | -p mount-start-1-932105                           | mount-start-1-932105 | jenkins | v1.32.0 | 03 Jan 24 19:21 UTC | 03 Jan 24 19:21 UTC |
	| start   | -p multinode-484895                               | multinode-484895     | jenkins | v1.32.0 | 03 Jan 24 19:21 UTC | 03 Jan 24 19:22 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=kvm2                                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-484895 -- apply -f                   | multinode-484895     | jenkins | v1.32.0 | 03 Jan 24 19:22 UTC | 03 Jan 24 19:22 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-484895 -- rollout                    | multinode-484895     | jenkins | v1.32.0 | 03 Jan 24 19:22 UTC | 03 Jan 24 19:22 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-484895 -- get pods -o                | multinode-484895     | jenkins | v1.32.0 | 03 Jan 24 19:22 UTC | 03 Jan 24 19:22 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-484895 -- get pods -o                | multinode-484895     | jenkins | v1.32.0 | 03 Jan 24 19:22 UTC | 03 Jan 24 19:22 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-484895 -- exec                       | multinode-484895     | jenkins | v1.32.0 | 03 Jan 24 19:22 UTC | 03 Jan 24 19:22 UTC |
	|         | busybox-5bc68d56bd-lmcnh --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-484895 -- exec                       | multinode-484895     | jenkins | v1.32.0 | 03 Jan 24 19:22 UTC | 03 Jan 24 19:22 UTC |
	|         | busybox-5bc68d56bd-xlczw --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-484895 -- exec                       | multinode-484895     | jenkins | v1.32.0 | 03 Jan 24 19:22 UTC | 03 Jan 24 19:22 UTC |
	|         | busybox-5bc68d56bd-lmcnh --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-484895 -- exec                       | multinode-484895     | jenkins | v1.32.0 | 03 Jan 24 19:22 UTC | 03 Jan 24 19:22 UTC |
	|         | busybox-5bc68d56bd-xlczw --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-484895 -- exec                       | multinode-484895     | jenkins | v1.32.0 | 03 Jan 24 19:22 UTC | 03 Jan 24 19:22 UTC |
	|         | busybox-5bc68d56bd-lmcnh -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-484895 -- exec                       | multinode-484895     | jenkins | v1.32.0 | 03 Jan 24 19:22 UTC | 03 Jan 24 19:22 UTC |
	|         | busybox-5bc68d56bd-xlczw -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-484895 -- get pods -o                | multinode-484895     | jenkins | v1.32.0 | 03 Jan 24 19:22 UTC | 03 Jan 24 19:22 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-484895 -- exec                       | multinode-484895     | jenkins | v1.32.0 | 03 Jan 24 19:22 UTC | 03 Jan 24 19:22 UTC |
	|         | busybox-5bc68d56bd-lmcnh                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-484895 -- exec                       | multinode-484895     | jenkins | v1.32.0 | 03 Jan 24 19:22 UTC |                     |
	|         | busybox-5bc68d56bd-lmcnh -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-484895 -- exec                       | multinode-484895     | jenkins | v1.32.0 | 03 Jan 24 19:22 UTC | 03 Jan 24 19:22 UTC |
	|         | busybox-5bc68d56bd-xlczw                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-484895 -- exec                       | multinode-484895     | jenkins | v1.32.0 | 03 Jan 24 19:22 UTC |                     |
	|         | busybox-5bc68d56bd-xlczw -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/03 19:21:00
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0103 19:21:00.235938   30211 out.go:296] Setting OutFile to fd 1 ...
	I0103 19:21:00.236194   30211 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 19:21:00.236202   30211 out.go:309] Setting ErrFile to fd 2...
	I0103 19:21:00.236207   30211 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 19:21:00.236395   30211 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17885-9609/.minikube/bin
	I0103 19:21:00.236956   30211 out.go:303] Setting JSON to false
	I0103 19:21:00.237829   30211 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3807,"bootTime":1704305853,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0103 19:21:00.237888   30211 start.go:138] virtualization: kvm guest
	I0103 19:21:00.240438   30211 out.go:177] * [multinode-484895] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0103 19:21:00.242177   30211 out.go:177]   - MINIKUBE_LOCATION=17885
	I0103 19:21:00.242176   30211 notify.go:220] Checking for updates...
	I0103 19:21:00.243855   30211 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0103 19:21:00.245459   30211 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17885-9609/kubeconfig
	I0103 19:21:00.246956   30211 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17885-9609/.minikube
	I0103 19:21:00.248418   30211 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0103 19:21:00.249994   30211 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0103 19:21:00.251690   30211 driver.go:392] Setting default libvirt URI to qemu:///system
	I0103 19:21:00.287415   30211 out.go:177] * Using the kvm2 driver based on user configuration
	I0103 19:21:00.289116   30211 start.go:298] selected driver: kvm2
	I0103 19:21:00.289138   30211 start.go:902] validating driver "kvm2" against <nil>
	I0103 19:21:00.289164   30211 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0103 19:21:00.289857   30211 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 19:21:00.290009   30211 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17885-9609/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0103 19:21:00.304783   30211 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0103 19:21:00.304836   30211 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0103 19:21:00.305064   30211 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0103 19:21:00.305127   30211 cni.go:84] Creating CNI manager for ""
	I0103 19:21:00.305141   30211 cni.go:136] 0 nodes found, recommending kindnet
	I0103 19:21:00.305153   30211 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I0103 19:21:00.305161   30211 start_flags.go:323] config:
	{Name:multinode-484895 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-484895 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 19:21:00.305311   30211 iso.go:125] acquiring lock: {Name:mk59d09085a9554144b68de9b7bfe0e0fce53cc5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 19:21:00.307602   30211 out.go:177] * Starting control plane node multinode-484895 in cluster multinode-484895
	I0103 19:21:00.309310   30211 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0103 19:21:00.309364   30211 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0103 19:21:00.309390   30211 cache.go:56] Caching tarball of preloaded images
	I0103 19:21:00.309466   30211 preload.go:174] Found /home/jenkins/minikube-integration/17885-9609/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0103 19:21:00.309477   30211 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0103 19:21:00.309779   30211 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/multinode-484895/config.json ...
	I0103 19:21:00.309799   30211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/multinode-484895/config.json: {Name:mk706c842e9703bf8bb70e0d6ffd28282b1d9053 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 19:21:00.309929   30211 start.go:365] acquiring machines lock for multinode-484895: {Name:mk43df5d7e9fef8aa5f3e5c539ca15bff35ae8cf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0103 19:21:00.309957   30211 start.go:369] acquired machines lock for "multinode-484895" in 16.453µs
	I0103 19:21:00.309972   30211 start.go:93] Provisioning new machine with config: &{Name:multinode-484895 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.28.4 ClusterName:multinode-484895 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 M
ountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0103 19:21:00.310044   30211 start.go:125] createHost starting for "" (driver="kvm2")
	I0103 19:21:00.311927   30211 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0103 19:21:00.312143   30211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 19:21:00.312200   30211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 19:21:00.326464   30211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43653
	I0103 19:21:00.326973   30211 main.go:141] libmachine: () Calling .GetVersion
	I0103 19:21:00.327528   30211 main.go:141] libmachine: Using API Version  1
	I0103 19:21:00.327566   30211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 19:21:00.327896   30211 main.go:141] libmachine: () Calling .GetMachineName
	I0103 19:21:00.328169   30211 main.go:141] libmachine: (multinode-484895) Calling .GetMachineName
	I0103 19:21:00.328363   30211 main.go:141] libmachine: (multinode-484895) Calling .DriverName
	I0103 19:21:00.328547   30211 start.go:159] libmachine.API.Create for "multinode-484895" (driver="kvm2")
	I0103 19:21:00.328577   30211 client.go:168] LocalClient.Create starting
	I0103 19:21:00.328632   30211 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem
	I0103 19:21:00.328673   30211 main.go:141] libmachine: Decoding PEM data...
	I0103 19:21:00.328695   30211 main.go:141] libmachine: Parsing certificate...
	I0103 19:21:00.328757   30211 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem
	I0103 19:21:00.328778   30211 main.go:141] libmachine: Decoding PEM data...
	I0103 19:21:00.328789   30211 main.go:141] libmachine: Parsing certificate...
	I0103 19:21:00.328804   30211 main.go:141] libmachine: Running pre-create checks...
	I0103 19:21:00.328816   30211 main.go:141] libmachine: (multinode-484895) Calling .PreCreateCheck
	I0103 19:21:00.329235   30211 main.go:141] libmachine: (multinode-484895) Calling .GetConfigRaw
	I0103 19:21:00.329682   30211 main.go:141] libmachine: Creating machine...
	I0103 19:21:00.329703   30211 main.go:141] libmachine: (multinode-484895) Calling .Create
	I0103 19:21:00.329876   30211 main.go:141] libmachine: (multinode-484895) Creating KVM machine...
	I0103 19:21:00.331485   30211 main.go:141] libmachine: (multinode-484895) DBG | found existing default KVM network
	I0103 19:21:00.332211   30211 main.go:141] libmachine: (multinode-484895) DBG | I0103 19:21:00.332051   30233 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000014760}
	I0103 19:21:00.337987   30211 main.go:141] libmachine: (multinode-484895) DBG | trying to create private KVM network mk-multinode-484895 192.168.39.0/24...
	I0103 19:21:00.412274   30211 main.go:141] libmachine: (multinode-484895) DBG | private KVM network mk-multinode-484895 192.168.39.0/24 created
	I0103 19:21:00.412311   30211 main.go:141] libmachine: (multinode-484895) DBG | I0103 19:21:00.412250   30233 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17885-9609/.minikube
	I0103 19:21:00.412327   30211 main.go:141] libmachine: (multinode-484895) Setting up store path in /home/jenkins/minikube-integration/17885-9609/.minikube/machines/multinode-484895 ...
	I0103 19:21:00.412345   30211 main.go:141] libmachine: (multinode-484895) Building disk image from file:///home/jenkins/minikube-integration/17885-9609/.minikube/cache/iso/amd64/minikube-v1.32.1-1702708929-17806-amd64.iso
	I0103 19:21:00.412419   30211 main.go:141] libmachine: (multinode-484895) Downloading /home/jenkins/minikube-integration/17885-9609/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17885-9609/.minikube/cache/iso/amd64/minikube-v1.32.1-1702708929-17806-amd64.iso...
	I0103 19:21:00.622436   30211 main.go:141] libmachine: (multinode-484895) DBG | I0103 19:21:00.622273   30233 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/multinode-484895/id_rsa...
	I0103 19:21:00.836695   30211 main.go:141] libmachine: (multinode-484895) DBG | I0103 19:21:00.836543   30233 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/multinode-484895/multinode-484895.rawdisk...
	I0103 19:21:00.836728   30211 main.go:141] libmachine: (multinode-484895) DBG | Writing magic tar header
	I0103 19:21:00.836752   30211 main.go:141] libmachine: (multinode-484895) DBG | Writing SSH key tar header
	I0103 19:21:00.836765   30211 main.go:141] libmachine: (multinode-484895) DBG | I0103 19:21:00.836673   30233 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17885-9609/.minikube/machines/multinode-484895 ...
	I0103 19:21:00.836786   30211 main.go:141] libmachine: (multinode-484895) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/multinode-484895
	I0103 19:21:00.836872   30211 main.go:141] libmachine: (multinode-484895) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17885-9609/.minikube/machines
	I0103 19:21:00.836895   30211 main.go:141] libmachine: (multinode-484895) Setting executable bit set on /home/jenkins/minikube-integration/17885-9609/.minikube/machines/multinode-484895 (perms=drwx------)
	I0103 19:21:00.836913   30211 main.go:141] libmachine: (multinode-484895) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17885-9609/.minikube
	I0103 19:21:00.836924   30211 main.go:141] libmachine: (multinode-484895) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17885-9609
	I0103 19:21:00.836931   30211 main.go:141] libmachine: (multinode-484895) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0103 19:21:00.836947   30211 main.go:141] libmachine: (multinode-484895) DBG | Checking permissions on dir: /home/jenkins
	I0103 19:21:00.836957   30211 main.go:141] libmachine: (multinode-484895) DBG | Checking permissions on dir: /home
	I0103 19:21:00.836966   30211 main.go:141] libmachine: (multinode-484895) DBG | Skipping /home - not owner
	I0103 19:21:00.836974   30211 main.go:141] libmachine: (multinode-484895) Setting executable bit set on /home/jenkins/minikube-integration/17885-9609/.minikube/machines (perms=drwxr-xr-x)
	I0103 19:21:00.836985   30211 main.go:141] libmachine: (multinode-484895) Setting executable bit set on /home/jenkins/minikube-integration/17885-9609/.minikube (perms=drwxr-xr-x)
	I0103 19:21:00.836992   30211 main.go:141] libmachine: (multinode-484895) Setting executable bit set on /home/jenkins/minikube-integration/17885-9609 (perms=drwxrwxr-x)
	I0103 19:21:00.837006   30211 main.go:141] libmachine: (multinode-484895) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0103 19:21:00.837015   30211 main.go:141] libmachine: (multinode-484895) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0103 19:21:00.837022   30211 main.go:141] libmachine: (multinode-484895) Creating domain...
	I0103 19:21:00.838232   30211 main.go:141] libmachine: (multinode-484895) define libvirt domain using xml: 
	I0103 19:21:00.838262   30211 main.go:141] libmachine: (multinode-484895) <domain type='kvm'>
	I0103 19:21:00.838270   30211 main.go:141] libmachine: (multinode-484895)   <name>multinode-484895</name>
	I0103 19:21:00.838276   30211 main.go:141] libmachine: (multinode-484895)   <memory unit='MiB'>2200</memory>
	I0103 19:21:00.838283   30211 main.go:141] libmachine: (multinode-484895)   <vcpu>2</vcpu>
	I0103 19:21:00.838288   30211 main.go:141] libmachine: (multinode-484895)   <features>
	I0103 19:21:00.838299   30211 main.go:141] libmachine: (multinode-484895)     <acpi/>
	I0103 19:21:00.838307   30211 main.go:141] libmachine: (multinode-484895)     <apic/>
	I0103 19:21:00.838313   30211 main.go:141] libmachine: (multinode-484895)     <pae/>
	I0103 19:21:00.838318   30211 main.go:141] libmachine: (multinode-484895)     
	I0103 19:21:00.838325   30211 main.go:141] libmachine: (multinode-484895)   </features>
	I0103 19:21:00.838333   30211 main.go:141] libmachine: (multinode-484895)   <cpu mode='host-passthrough'>
	I0103 19:21:00.838338   30211 main.go:141] libmachine: (multinode-484895)   
	I0103 19:21:00.838346   30211 main.go:141] libmachine: (multinode-484895)   </cpu>
	I0103 19:21:00.838352   30211 main.go:141] libmachine: (multinode-484895)   <os>
	I0103 19:21:00.838372   30211 main.go:141] libmachine: (multinode-484895)     <type>hvm</type>
	I0103 19:21:00.838378   30211 main.go:141] libmachine: (multinode-484895)     <boot dev='cdrom'/>
	I0103 19:21:00.838385   30211 main.go:141] libmachine: (multinode-484895)     <boot dev='hd'/>
	I0103 19:21:00.838396   30211 main.go:141] libmachine: (multinode-484895)     <bootmenu enable='no'/>
	I0103 19:21:00.838463   30211 main.go:141] libmachine: (multinode-484895)   </os>
	I0103 19:21:00.838497   30211 main.go:141] libmachine: (multinode-484895)   <devices>
	I0103 19:21:00.838514   30211 main.go:141] libmachine: (multinode-484895)     <disk type='file' device='cdrom'>
	I0103 19:21:00.838556   30211 main.go:141] libmachine: (multinode-484895)       <source file='/home/jenkins/minikube-integration/17885-9609/.minikube/machines/multinode-484895/boot2docker.iso'/>
	I0103 19:21:00.838571   30211 main.go:141] libmachine: (multinode-484895)       <target dev='hdc' bus='scsi'/>
	I0103 19:21:00.838586   30211 main.go:141] libmachine: (multinode-484895)       <readonly/>
	I0103 19:21:00.838598   30211 main.go:141] libmachine: (multinode-484895)     </disk>
	I0103 19:21:00.838609   30211 main.go:141] libmachine: (multinode-484895)     <disk type='file' device='disk'>
	I0103 19:21:00.838624   30211 main.go:141] libmachine: (multinode-484895)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0103 19:21:00.838639   30211 main.go:141] libmachine: (multinode-484895)       <source file='/home/jenkins/minikube-integration/17885-9609/.minikube/machines/multinode-484895/multinode-484895.rawdisk'/>
	I0103 19:21:00.838670   30211 main.go:141] libmachine: (multinode-484895)       <target dev='hda' bus='virtio'/>
	I0103 19:21:00.838693   30211 main.go:141] libmachine: (multinode-484895)     </disk>
	I0103 19:21:00.838701   30211 main.go:141] libmachine: (multinode-484895)     <interface type='network'>
	I0103 19:21:00.838711   30211 main.go:141] libmachine: (multinode-484895)       <source network='mk-multinode-484895'/>
	I0103 19:21:00.838718   30211 main.go:141] libmachine: (multinode-484895)       <model type='virtio'/>
	I0103 19:21:00.838729   30211 main.go:141] libmachine: (multinode-484895)     </interface>
	I0103 19:21:00.838745   30211 main.go:141] libmachine: (multinode-484895)     <interface type='network'>
	I0103 19:21:00.838756   30211 main.go:141] libmachine: (multinode-484895)       <source network='default'/>
	I0103 19:21:00.838765   30211 main.go:141] libmachine: (multinode-484895)       <model type='virtio'/>
	I0103 19:21:00.838770   30211 main.go:141] libmachine: (multinode-484895)     </interface>
	I0103 19:21:00.838778   30211 main.go:141] libmachine: (multinode-484895)     <serial type='pty'>
	I0103 19:21:00.838784   30211 main.go:141] libmachine: (multinode-484895)       <target port='0'/>
	I0103 19:21:00.838792   30211 main.go:141] libmachine: (multinode-484895)     </serial>
	I0103 19:21:00.838798   30211 main.go:141] libmachine: (multinode-484895)     <console type='pty'>
	I0103 19:21:00.838807   30211 main.go:141] libmachine: (multinode-484895)       <target type='serial' port='0'/>
	I0103 19:21:00.838812   30211 main.go:141] libmachine: (multinode-484895)     </console>
	I0103 19:21:00.838818   30211 main.go:141] libmachine: (multinode-484895)     <rng model='virtio'>
	I0103 19:21:00.838825   30211 main.go:141] libmachine: (multinode-484895)       <backend model='random'>/dev/random</backend>
	I0103 19:21:00.838833   30211 main.go:141] libmachine: (multinode-484895)     </rng>
	I0103 19:21:00.838838   30211 main.go:141] libmachine: (multinode-484895)     
	I0103 19:21:00.838846   30211 main.go:141] libmachine: (multinode-484895)     
	I0103 19:21:00.838851   30211 main.go:141] libmachine: (multinode-484895)   </devices>
	I0103 19:21:00.838859   30211 main.go:141] libmachine: (multinode-484895) </domain>
	I0103 19:21:00.838863   30211 main.go:141] libmachine: (multinode-484895) 
	I0103 19:21:00.843589   30211 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined MAC address 52:54:00:c4:69:8c in network default
	I0103 19:21:00.844373   30211 main.go:141] libmachine: (multinode-484895) Ensuring networks are active...
	I0103 19:21:00.844395   30211 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:21:00.845343   30211 main.go:141] libmachine: (multinode-484895) Ensuring network default is active
	I0103 19:21:00.845740   30211 main.go:141] libmachine: (multinode-484895) Ensuring network mk-multinode-484895 is active
	I0103 19:21:00.846229   30211 main.go:141] libmachine: (multinode-484895) Getting domain xml...
	I0103 19:21:00.847098   30211 main.go:141] libmachine: (multinode-484895) Creating domain...
	I0103 19:21:02.104182   30211 main.go:141] libmachine: (multinode-484895) Waiting to get IP...
	I0103 19:21:02.104895   30211 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:21:02.105308   30211 main.go:141] libmachine: (multinode-484895) DBG | unable to find current IP address of domain multinode-484895 in network mk-multinode-484895
	I0103 19:21:02.105333   30211 main.go:141] libmachine: (multinode-484895) DBG | I0103 19:21:02.105283   30233 retry.go:31] will retry after 221.692649ms: waiting for machine to come up
	I0103 19:21:02.329107   30211 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:21:02.329660   30211 main.go:141] libmachine: (multinode-484895) DBG | unable to find current IP address of domain multinode-484895 in network mk-multinode-484895
	I0103 19:21:02.329709   30211 main.go:141] libmachine: (multinode-484895) DBG | I0103 19:21:02.329567   30233 retry.go:31] will retry after 248.191606ms: waiting for machine to come up
	I0103 19:21:02.579309   30211 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:21:02.579783   30211 main.go:141] libmachine: (multinode-484895) DBG | unable to find current IP address of domain multinode-484895 in network mk-multinode-484895
	I0103 19:21:02.579803   30211 main.go:141] libmachine: (multinode-484895) DBG | I0103 19:21:02.579749   30233 retry.go:31] will retry after 386.019476ms: waiting for machine to come up
	I0103 19:21:02.967308   30211 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:21:02.967699   30211 main.go:141] libmachine: (multinode-484895) DBG | unable to find current IP address of domain multinode-484895 in network mk-multinode-484895
	I0103 19:21:02.967745   30211 main.go:141] libmachine: (multinode-484895) DBG | I0103 19:21:02.967669   30233 retry.go:31] will retry after 488.762793ms: waiting for machine to come up
	I0103 19:21:03.458454   30211 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:21:03.458957   30211 main.go:141] libmachine: (multinode-484895) DBG | unable to find current IP address of domain multinode-484895 in network mk-multinode-484895
	I0103 19:21:03.459003   30211 main.go:141] libmachine: (multinode-484895) DBG | I0103 19:21:03.458906   30233 retry.go:31] will retry after 734.133612ms: waiting for machine to come up
	I0103 19:21:04.194315   30211 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:21:04.194796   30211 main.go:141] libmachine: (multinode-484895) DBG | unable to find current IP address of domain multinode-484895 in network mk-multinode-484895
	I0103 19:21:04.194815   30211 main.go:141] libmachine: (multinode-484895) DBG | I0103 19:21:04.194759   30233 retry.go:31] will retry after 579.552445ms: waiting for machine to come up
	I0103 19:21:04.775709   30211 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:21:04.776180   30211 main.go:141] libmachine: (multinode-484895) DBG | unable to find current IP address of domain multinode-484895 in network mk-multinode-484895
	I0103 19:21:04.776212   30211 main.go:141] libmachine: (multinode-484895) DBG | I0103 19:21:04.776120   30233 retry.go:31] will retry after 1.153844928s: waiting for machine to come up
	I0103 19:21:05.931524   30211 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:21:05.932041   30211 main.go:141] libmachine: (multinode-484895) DBG | unable to find current IP address of domain multinode-484895 in network mk-multinode-484895
	I0103 19:21:05.932067   30211 main.go:141] libmachine: (multinode-484895) DBG | I0103 19:21:05.931979   30233 retry.go:31] will retry after 927.512311ms: waiting for machine to come up
	I0103 19:21:06.861084   30211 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:21:06.861471   30211 main.go:141] libmachine: (multinode-484895) DBG | unable to find current IP address of domain multinode-484895 in network mk-multinode-484895
	I0103 19:21:06.861505   30211 main.go:141] libmachine: (multinode-484895) DBG | I0103 19:21:06.861424   30233 retry.go:31] will retry after 1.309132678s: waiting for machine to come up
	I0103 19:21:08.172502   30211 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:21:08.173019   30211 main.go:141] libmachine: (multinode-484895) DBG | unable to find current IP address of domain multinode-484895 in network mk-multinode-484895
	I0103 19:21:08.173044   30211 main.go:141] libmachine: (multinode-484895) DBG | I0103 19:21:08.172954   30233 retry.go:31] will retry after 1.616649825s: waiting for machine to come up
	I0103 19:21:09.791911   30211 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:21:09.792393   30211 main.go:141] libmachine: (multinode-484895) DBG | unable to find current IP address of domain multinode-484895 in network mk-multinode-484895
	I0103 19:21:09.792425   30211 main.go:141] libmachine: (multinode-484895) DBG | I0103 19:21:09.792329   30233 retry.go:31] will retry after 2.80855154s: waiting for machine to come up
	I0103 19:21:12.604085   30211 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:21:12.604672   30211 main.go:141] libmachine: (multinode-484895) DBG | unable to find current IP address of domain multinode-484895 in network mk-multinode-484895
	I0103 19:21:12.604704   30211 main.go:141] libmachine: (multinode-484895) DBG | I0103 19:21:12.604608   30233 retry.go:31] will retry after 3.105969974s: waiting for machine to come up
	I0103 19:21:15.712535   30211 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:21:15.712988   30211 main.go:141] libmachine: (multinode-484895) DBG | unable to find current IP address of domain multinode-484895 in network mk-multinode-484895
	I0103 19:21:15.713010   30211 main.go:141] libmachine: (multinode-484895) DBG | I0103 19:21:15.712947   30233 retry.go:31] will retry after 2.9219835s: waiting for machine to come up
	I0103 19:21:18.638132   30211 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:21:18.638496   30211 main.go:141] libmachine: (multinode-484895) DBG | unable to find current IP address of domain multinode-484895 in network mk-multinode-484895
	I0103 19:21:18.638537   30211 main.go:141] libmachine: (multinode-484895) DBG | I0103 19:21:18.638446   30233 retry.go:31] will retry after 4.850873433s: waiting for machine to come up
	I0103 19:21:23.493561   30211 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:21:23.493973   30211 main.go:141] libmachine: (multinode-484895) Found IP for machine: 192.168.39.191
	I0103 19:21:23.493994   30211 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has current primary IP address 192.168.39.191 and MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:21:23.494000   30211 main.go:141] libmachine: (multinode-484895) Reserving static IP address...
	I0103 19:21:23.494344   30211 main.go:141] libmachine: (multinode-484895) DBG | unable to find host DHCP lease matching {name: "multinode-484895", mac: "52:54:00:28:f0:8c", ip: "192.168.39.191"} in network mk-multinode-484895
	I0103 19:21:23.567398   30211 main.go:141] libmachine: (multinode-484895) DBG | Getting to WaitForSSH function...
	I0103 19:21:23.567432   30211 main.go:141] libmachine: (multinode-484895) Reserved static IP address: 192.168.39.191
	I0103 19:21:23.567484   30211 main.go:141] libmachine: (multinode-484895) Waiting for SSH to be available...
	I0103 19:21:23.570203   30211 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:21:23.570613   30211 main.go:141] libmachine: (multinode-484895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:f0:8c", ip: ""} in network mk-multinode-484895: {Iface:virbr1 ExpiryTime:2024-01-03 20:21:15 +0000 UTC Type:0 Mac:52:54:00:28:f0:8c Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:minikube Clientid:01:52:54:00:28:f0:8c}
	I0103 19:21:23.570656   30211 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined IP address 192.168.39.191 and MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:21:23.570852   30211 main.go:141] libmachine: (multinode-484895) DBG | Using SSH client type: external
	I0103 19:21:23.570877   30211 main.go:141] libmachine: (multinode-484895) DBG | Using SSH private key: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/multinode-484895/id_rsa (-rw-------)
	I0103 19:21:23.570917   30211 main.go:141] libmachine: (multinode-484895) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.191 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17885-9609/.minikube/machines/multinode-484895/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0103 19:21:23.570932   30211 main.go:141] libmachine: (multinode-484895) DBG | About to run SSH command:
	I0103 19:21:23.570950   30211 main.go:141] libmachine: (multinode-484895) DBG | exit 0
	I0103 19:21:23.654470   30211 main.go:141] libmachine: (multinode-484895) DBG | SSH cmd err, output: <nil>: 
	I0103 19:21:23.654806   30211 main.go:141] libmachine: (multinode-484895) KVM machine creation complete!
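
The retries above are libmachine polling for the new VM's DHCP lease, waiting a little longer after each failed lookup until the address appears and SSH answers. A minimal Go sketch of that poll-with-growing-backoff pattern (helper names invented for illustration, not minikube's actual retry.go):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // waitForIP polls lookup until it returns an address or the timeout
    // expires, sleeping a little longer (with jitter) between attempts.
    func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        backoff := 200 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookup(); err == nil {
                return ip, nil
            }
            sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))/2
            fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
            time.Sleep(sleep)
            backoff = backoff * 3 / 2 // grow the delay between polls
        }
        return "", errors.New("timed out waiting for an IP address")
    }

    func main() {
        attempts := 0
        lookup := func() (string, error) {
            attempts++
            if attempts < 4 { // pretend the DHCP lease shows up on the 4th poll
                return "", errors.New("unable to find current IP address")
            }
            return "192.168.39.191", nil
        }
        fmt.Println(waitForIP(lookup, time.Minute))
    }
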
	I0103 19:21:23.655148   30211 main.go:141] libmachine: (multinode-484895) Calling .GetConfigRaw
	I0103 19:21:23.655675   30211 main.go:141] libmachine: (multinode-484895) Calling .DriverName
	I0103 19:21:23.655897   30211 main.go:141] libmachine: (multinode-484895) Calling .DriverName
	I0103 19:21:23.656089   30211 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0103 19:21:23.656108   30211 main.go:141] libmachine: (multinode-484895) Calling .GetState
	I0103 19:21:23.657532   30211 main.go:141] libmachine: Detecting operating system of created instance...
	I0103 19:21:23.657551   30211 main.go:141] libmachine: Waiting for SSH to be available...
	I0103 19:21:23.657559   30211 main.go:141] libmachine: Getting to WaitForSSH function...
	I0103 19:21:23.657565   30211 main.go:141] libmachine: (multinode-484895) Calling .GetSSHHostname
	I0103 19:21:23.659925   30211 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:21:23.660231   30211 main.go:141] libmachine: (multinode-484895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:f0:8c", ip: ""} in network mk-multinode-484895: {Iface:virbr1 ExpiryTime:2024-01-03 20:21:15 +0000 UTC Type:0 Mac:52:54:00:28:f0:8c Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:multinode-484895 Clientid:01:52:54:00:28:f0:8c}
	I0103 19:21:23.660259   30211 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined IP address 192.168.39.191 and MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:21:23.660395   30211 main.go:141] libmachine: (multinode-484895) Calling .GetSSHPort
	I0103 19:21:23.660560   30211 main.go:141] libmachine: (multinode-484895) Calling .GetSSHKeyPath
	I0103 19:21:23.660736   30211 main.go:141] libmachine: (multinode-484895) Calling .GetSSHKeyPath
	I0103 19:21:23.660849   30211 main.go:141] libmachine: (multinode-484895) Calling .GetSSHUsername
	I0103 19:21:23.660999   30211 main.go:141] libmachine: Using SSH client type: native
	I0103 19:21:23.661351   30211 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I0103 19:21:23.661363   30211 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0103 19:21:23.769752   30211 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0103 19:21:23.769774   30211 main.go:141] libmachine: Detecting the provisioner...
	I0103 19:21:23.769782   30211 main.go:141] libmachine: (multinode-484895) Calling .GetSSHHostname
	I0103 19:21:23.772374   30211 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:21:23.772839   30211 main.go:141] libmachine: (multinode-484895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:f0:8c", ip: ""} in network mk-multinode-484895: {Iface:virbr1 ExpiryTime:2024-01-03 20:21:15 +0000 UTC Type:0 Mac:52:54:00:28:f0:8c Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:multinode-484895 Clientid:01:52:54:00:28:f0:8c}
	I0103 19:21:23.772864   30211 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined IP address 192.168.39.191 and MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:21:23.773032   30211 main.go:141] libmachine: (multinode-484895) Calling .GetSSHPort
	I0103 19:21:23.773228   30211 main.go:141] libmachine: (multinode-484895) Calling .GetSSHKeyPath
	I0103 19:21:23.773398   30211 main.go:141] libmachine: (multinode-484895) Calling .GetSSHKeyPath
	I0103 19:21:23.773533   30211 main.go:141] libmachine: (multinode-484895) Calling .GetSSHUsername
	I0103 19:21:23.773706   30211 main.go:141] libmachine: Using SSH client type: native
	I0103 19:21:23.774091   30211 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I0103 19:21:23.774105   30211 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0103 19:21:23.883166   30211 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gae27a7b-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0103 19:21:23.883256   30211 main.go:141] libmachine: found compatible host: buildroot
	I0103 19:21:23.883272   30211 main.go:141] libmachine: Provisioning with buildroot...
	I0103 19:21:23.883287   30211 main.go:141] libmachine: (multinode-484895) Calling .GetMachineName
	I0103 19:21:23.883549   30211 buildroot.go:166] provisioning hostname "multinode-484895"
	I0103 19:21:23.883578   30211 main.go:141] libmachine: (multinode-484895) Calling .GetMachineName
	I0103 19:21:23.883750   30211 main.go:141] libmachine: (multinode-484895) Calling .GetSSHHostname
	I0103 19:21:23.886265   30211 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:21:23.886504   30211 main.go:141] libmachine: (multinode-484895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:f0:8c", ip: ""} in network mk-multinode-484895: {Iface:virbr1 ExpiryTime:2024-01-03 20:21:15 +0000 UTC Type:0 Mac:52:54:00:28:f0:8c Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:multinode-484895 Clientid:01:52:54:00:28:f0:8c}
	I0103 19:21:23.886546   30211 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined IP address 192.168.39.191 and MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:21:23.886637   30211 main.go:141] libmachine: (multinode-484895) Calling .GetSSHPort
	I0103 19:21:23.886822   30211 main.go:141] libmachine: (multinode-484895) Calling .GetSSHKeyPath
	I0103 19:21:23.886996   30211 main.go:141] libmachine: (multinode-484895) Calling .GetSSHKeyPath
	I0103 19:21:23.887178   30211 main.go:141] libmachine: (multinode-484895) Calling .GetSSHUsername
	I0103 19:21:23.887375   30211 main.go:141] libmachine: Using SSH client type: native
	I0103 19:21:23.887694   30211 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I0103 19:21:23.887708   30211 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-484895 && echo "multinode-484895" | sudo tee /etc/hostname
	I0103 19:21:24.013306   30211 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-484895
	
	I0103 19:21:24.013335   30211 main.go:141] libmachine: (multinode-484895) Calling .GetSSHHostname
	I0103 19:21:24.016202   30211 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:21:24.016671   30211 main.go:141] libmachine: (multinode-484895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:f0:8c", ip: ""} in network mk-multinode-484895: {Iface:virbr1 ExpiryTime:2024-01-03 20:21:15 +0000 UTC Type:0 Mac:52:54:00:28:f0:8c Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:multinode-484895 Clientid:01:52:54:00:28:f0:8c}
	I0103 19:21:24.016700   30211 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined IP address 192.168.39.191 and MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:21:24.016891   30211 main.go:141] libmachine: (multinode-484895) Calling .GetSSHPort
	I0103 19:21:24.017073   30211 main.go:141] libmachine: (multinode-484895) Calling .GetSSHKeyPath
	I0103 19:21:24.017281   30211 main.go:141] libmachine: (multinode-484895) Calling .GetSSHKeyPath
	I0103 19:21:24.017422   30211 main.go:141] libmachine: (multinode-484895) Calling .GetSSHUsername
	I0103 19:21:24.017571   30211 main.go:141] libmachine: Using SSH client type: native
	I0103 19:21:24.017935   30211 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I0103 19:21:24.017958   30211 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-484895' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-484895/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-484895' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0103 19:21:24.135631   30211 main.go:141] libmachine: SSH cmd err, output: <nil>: 
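
The SSH command above first checks whether /etc/hosts already names the machine, then either rewrites the 127.0.1.1 line or appends one. The same decision, done in memory as a hedged Go sketch (ensureHostsEntry is a made-up helper, not the provisioner's code):

    package main

    import (
        "fmt"
        "strings"
    )

    // ensureHostsEntry mirrors the shell above: leave the file alone if the
    // hostname is already mapped, otherwise rewrite an existing 127.0.1.1
    // line or append a new one. Purely in-memory.
    func ensureHostsEntry(hosts, name string) string {
        lines := strings.Split(hosts, "\n")
        for _, l := range lines {
            f := strings.Fields(l)
            if len(f) >= 2 && f[len(f)-1] == name {
                return hosts // hostname already mapped
            }
        }
        for i, l := range lines {
            if strings.HasPrefix(l, "127.0.1.1") {
                lines[i] = "127.0.1.1 " + name // rewrite the loopback alias line
                return strings.Join(lines, "\n")
            }
        }
        return hosts + "\n127.0.1.1 " + name // no alias line yet, append one
    }

    func main() {
        hosts := "127.0.0.1 localhost\n127.0.1.1 minikube"
        fmt.Println(ensureHostsEntry(hosts, "multinode-484895"))
    }
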
	I0103 19:21:24.135662   30211 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17885-9609/.minikube CaCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17885-9609/.minikube}
	I0103 19:21:24.135696   30211 buildroot.go:174] setting up certificates
	I0103 19:21:24.135713   30211 provision.go:83] configureAuth start
	I0103 19:21:24.135730   30211 main.go:141] libmachine: (multinode-484895) Calling .GetMachineName
	I0103 19:21:24.135997   30211 main.go:141] libmachine: (multinode-484895) Calling .GetIP
	I0103 19:21:24.138415   30211 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:21:24.138727   30211 main.go:141] libmachine: (multinode-484895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:f0:8c", ip: ""} in network mk-multinode-484895: {Iface:virbr1 ExpiryTime:2024-01-03 20:21:15 +0000 UTC Type:0 Mac:52:54:00:28:f0:8c Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:multinode-484895 Clientid:01:52:54:00:28:f0:8c}
	I0103 19:21:24.138758   30211 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined IP address 192.168.39.191 and MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:21:24.138882   30211 main.go:141] libmachine: (multinode-484895) Calling .GetSSHHostname
	I0103 19:21:24.141015   30211 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:21:24.141364   30211 main.go:141] libmachine: (multinode-484895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:f0:8c", ip: ""} in network mk-multinode-484895: {Iface:virbr1 ExpiryTime:2024-01-03 20:21:15 +0000 UTC Type:0 Mac:52:54:00:28:f0:8c Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:multinode-484895 Clientid:01:52:54:00:28:f0:8c}
	I0103 19:21:24.141388   30211 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined IP address 192.168.39.191 and MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:21:24.141509   30211 provision.go:138] copyHostCerts
	I0103 19:21:24.141577   30211 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem
	I0103 19:21:24.141625   30211 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem, removing ...
	I0103 19:21:24.141645   30211 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem
	I0103 19:21:24.141703   30211 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem (1078 bytes)
	I0103 19:21:24.141792   30211 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem
	I0103 19:21:24.141810   30211 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem, removing ...
	I0103 19:21:24.141816   30211 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem
	I0103 19:21:24.141834   30211 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem (1123 bytes)
	I0103 19:21:24.141886   30211 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem
	I0103 19:21:24.141901   30211 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem, removing ...
	I0103 19:21:24.141910   30211 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem
	I0103 19:21:24.141928   30211 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem (1679 bytes)
	I0103 19:21:24.141981   30211 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem org=jenkins.multinode-484895 san=[192.168.39.191 192.168.39.191 localhost 127.0.0.1 minikube multinode-484895]
	I0103 19:21:24.243837   30211 provision.go:172] copyRemoteCerts
	I0103 19:21:24.243893   30211 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0103 19:21:24.243915   30211 main.go:141] libmachine: (multinode-484895) Calling .GetSSHHostname
	I0103 19:21:24.246584   30211 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:21:24.246919   30211 main.go:141] libmachine: (multinode-484895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:f0:8c", ip: ""} in network mk-multinode-484895: {Iface:virbr1 ExpiryTime:2024-01-03 20:21:15 +0000 UTC Type:0 Mac:52:54:00:28:f0:8c Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:multinode-484895 Clientid:01:52:54:00:28:f0:8c}
	I0103 19:21:24.246946   30211 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined IP address 192.168.39.191 and MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:21:24.247127   30211 main.go:141] libmachine: (multinode-484895) Calling .GetSSHPort
	I0103 19:21:24.247347   30211 main.go:141] libmachine: (multinode-484895) Calling .GetSSHKeyPath
	I0103 19:21:24.247514   30211 main.go:141] libmachine: (multinode-484895) Calling .GetSSHUsername
	I0103 19:21:24.247639   30211 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/multinode-484895/id_rsa Username:docker}
	I0103 19:21:24.331054   30211 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0103 19:21:24.331135   30211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0103 19:21:24.353866   30211 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0103 19:21:24.353954   30211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0103 19:21:24.375270   30211 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0103 19:21:24.375333   30211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0103 19:21:24.396275   30211 provision.go:86] duration metric: configureAuth took 260.547556ms
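
configureAuth copies the CA material and issues a server certificate whose SANs cover the VM's IP, localhost and the hostname, as listed in the san=[...] line above. A self-contained Go sketch of building such a SAN list; it self-signs for brevity, whereas the real server.pem is signed by the minikube CA:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Illustrative only: SAN entries taken from the log line above.
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.multinode-484895"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(1, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"localhost", "minikube", "multinode-484895"},
            IPAddresses:  []net.IP{net.ParseIP("192.168.39.191"), net.ParseIP("127.0.0.1")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
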
	I0103 19:21:24.396299   30211 buildroot.go:189] setting minikube options for container-runtime
	I0103 19:21:24.396475   30211 config.go:182] Loaded profile config "multinode-484895": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 19:21:24.396542   30211 main.go:141] libmachine: (multinode-484895) Calling .GetSSHHostname
	I0103 19:21:24.398945   30211 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:21:24.399285   30211 main.go:141] libmachine: (multinode-484895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:f0:8c", ip: ""} in network mk-multinode-484895: {Iface:virbr1 ExpiryTime:2024-01-03 20:21:15 +0000 UTC Type:0 Mac:52:54:00:28:f0:8c Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:multinode-484895 Clientid:01:52:54:00:28:f0:8c}
	I0103 19:21:24.399316   30211 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined IP address 192.168.39.191 and MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:21:24.399472   30211 main.go:141] libmachine: (multinode-484895) Calling .GetSSHPort
	I0103 19:21:24.399647   30211 main.go:141] libmachine: (multinode-484895) Calling .GetSSHKeyPath
	I0103 19:21:24.399798   30211 main.go:141] libmachine: (multinode-484895) Calling .GetSSHKeyPath
	I0103 19:21:24.399893   30211 main.go:141] libmachine: (multinode-484895) Calling .GetSSHUsername
	I0103 19:21:24.400064   30211 main.go:141] libmachine: Using SSH client type: native
	I0103 19:21:24.400473   30211 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I0103 19:21:24.400491   30211 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0103 19:21:24.682798   30211 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0103 19:21:24.682833   30211 main.go:141] libmachine: Checking connection to Docker...
	I0103 19:21:24.682845   30211 main.go:141] libmachine: (multinode-484895) Calling .GetURL
	I0103 19:21:24.683991   30211 main.go:141] libmachine: (multinode-484895) DBG | Using libvirt version 6000000
	I0103 19:21:24.685922   30211 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:21:24.686229   30211 main.go:141] libmachine: (multinode-484895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:f0:8c", ip: ""} in network mk-multinode-484895: {Iface:virbr1 ExpiryTime:2024-01-03 20:21:15 +0000 UTC Type:0 Mac:52:54:00:28:f0:8c Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:multinode-484895 Clientid:01:52:54:00:28:f0:8c}
	I0103 19:21:24.686257   30211 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined IP address 192.168.39.191 and MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:21:24.686359   30211 main.go:141] libmachine: Docker is up and running!
	I0103 19:21:24.686376   30211 main.go:141] libmachine: Reticulating splines...
	I0103 19:21:24.686385   30211 client.go:171] LocalClient.Create took 24.357799964s
	I0103 19:21:24.686413   30211 start.go:167] duration metric: libmachine.API.Create for "multinode-484895" took 24.357867667s
	I0103 19:21:24.686424   30211 start.go:300] post-start starting for "multinode-484895" (driver="kvm2")
	I0103 19:21:24.686436   30211 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0103 19:21:24.686459   30211 main.go:141] libmachine: (multinode-484895) Calling .DriverName
	I0103 19:21:24.686726   30211 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0103 19:21:24.686748   30211 main.go:141] libmachine: (multinode-484895) Calling .GetSSHHostname
	I0103 19:21:24.688847   30211 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:21:24.689151   30211 main.go:141] libmachine: (multinode-484895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:f0:8c", ip: ""} in network mk-multinode-484895: {Iface:virbr1 ExpiryTime:2024-01-03 20:21:15 +0000 UTC Type:0 Mac:52:54:00:28:f0:8c Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:multinode-484895 Clientid:01:52:54:00:28:f0:8c}
	I0103 19:21:24.689179   30211 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined IP address 192.168.39.191 and MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:21:24.689311   30211 main.go:141] libmachine: (multinode-484895) Calling .GetSSHPort
	I0103 19:21:24.689511   30211 main.go:141] libmachine: (multinode-484895) Calling .GetSSHKeyPath
	I0103 19:21:24.689670   30211 main.go:141] libmachine: (multinode-484895) Calling .GetSSHUsername
	I0103 19:21:24.689830   30211 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/multinode-484895/id_rsa Username:docker}
	I0103 19:21:24.772614   30211 ssh_runner.go:195] Run: cat /etc/os-release
	I0103 19:21:24.776433   30211 command_runner.go:130] > NAME=Buildroot
	I0103 19:21:24.776452   30211 command_runner.go:130] > VERSION=2021.02.12-1-gae27a7b-dirty
	I0103 19:21:24.776456   30211 command_runner.go:130] > ID=buildroot
	I0103 19:21:24.776461   30211 command_runner.go:130] > VERSION_ID=2021.02.12
	I0103 19:21:24.776466   30211 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0103 19:21:24.776636   30211 info.go:137] Remote host: Buildroot 2021.02.12
	I0103 19:21:24.776667   30211 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-9609/.minikube/addons for local assets ...
	I0103 19:21:24.776739   30211 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-9609/.minikube/files for local assets ...
	I0103 19:21:24.776827   30211 filesync.go:149] local asset: /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem -> 167952.pem in /etc/ssl/certs
	I0103 19:21:24.776840   30211 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem -> /etc/ssl/certs/167952.pem
	I0103 19:21:24.776940   30211 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0103 19:21:24.785623   30211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem --> /etc/ssl/certs/167952.pem (1708 bytes)
	I0103 19:21:24.805726   30211 start.go:303] post-start completed in 119.286847ms
	I0103 19:21:24.805779   30211 main.go:141] libmachine: (multinode-484895) Calling .GetConfigRaw
	I0103 19:21:24.806306   30211 main.go:141] libmachine: (multinode-484895) Calling .GetIP
	I0103 19:21:24.809373   30211 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:21:24.809741   30211 main.go:141] libmachine: (multinode-484895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:f0:8c", ip: ""} in network mk-multinode-484895: {Iface:virbr1 ExpiryTime:2024-01-03 20:21:15 +0000 UTC Type:0 Mac:52:54:00:28:f0:8c Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:multinode-484895 Clientid:01:52:54:00:28:f0:8c}
	I0103 19:21:24.809767   30211 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined IP address 192.168.39.191 and MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:21:24.810029   30211 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/multinode-484895/config.json ...
	I0103 19:21:24.810252   30211 start.go:128] duration metric: createHost completed in 24.500197836s
	I0103 19:21:24.810275   30211 main.go:141] libmachine: (multinode-484895) Calling .GetSSHHostname
	I0103 19:21:24.812749   30211 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:21:24.813067   30211 main.go:141] libmachine: (multinode-484895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:f0:8c", ip: ""} in network mk-multinode-484895: {Iface:virbr1 ExpiryTime:2024-01-03 20:21:15 +0000 UTC Type:0 Mac:52:54:00:28:f0:8c Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:multinode-484895 Clientid:01:52:54:00:28:f0:8c}
	I0103 19:21:24.813095   30211 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined IP address 192.168.39.191 and MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:21:24.813231   30211 main.go:141] libmachine: (multinode-484895) Calling .GetSSHPort
	I0103 19:21:24.813416   30211 main.go:141] libmachine: (multinode-484895) Calling .GetSSHKeyPath
	I0103 19:21:24.813556   30211 main.go:141] libmachine: (multinode-484895) Calling .GetSSHKeyPath
	I0103 19:21:24.813715   30211 main.go:141] libmachine: (multinode-484895) Calling .GetSSHUsername
	I0103 19:21:24.813880   30211 main.go:141] libmachine: Using SSH client type: native
	I0103 19:21:24.814176   30211 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I0103 19:21:24.814188   30211 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0103 19:21:24.923285   30211 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704309684.896335361
	
	I0103 19:21:24.923306   30211 fix.go:206] guest clock: 1704309684.896335361
	I0103 19:21:24.923314   30211 fix.go:219] Guest: 2024-01-03 19:21:24.896335361 +0000 UTC Remote: 2024-01-03 19:21:24.81026418 +0000 UTC m=+24.622763701 (delta=86.071181ms)
	I0103 19:21:24.923331   30211 fix.go:190] guest clock delta is within tolerance: 86.071181ms
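
The guest/host clock check above compares a date +%s.%N reading from the VM with the host-side timestamp and accepts the 86ms difference. A small Go sketch of that comparison using the two timestamps from the log (the 2s tolerance here is a placeholder, not minikube's configured limit):

    package main

    import (
        "fmt"
        "time"
    )

    // clockDelta reports how far the guest clock is from the host clock and
    // whether the difference is inside the allowed drift. Names invented.
    func clockDelta(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
        d := guest.Sub(host)
        if d < 0 {
            d = -d
        }
        return d, d <= tolerance
    }

    func main() {
        guest := time.Unix(1704309684, 896335361)                      // guest "date +%s.%N"
        host := time.Date(2024, 1, 3, 19, 21, 24, 810264180, time.UTC) // host-side timestamp
        fmt.Println(clockDelta(guest, host, 2*time.Second))            // 86.071181ms true
    }
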
	I0103 19:21:24.923336   30211 start.go:83] releasing machines lock for "multinode-484895", held for 24.613370817s
	I0103 19:21:24.923358   30211 main.go:141] libmachine: (multinode-484895) Calling .DriverName
	I0103 19:21:24.923621   30211 main.go:141] libmachine: (multinode-484895) Calling .GetIP
	I0103 19:21:24.926469   30211 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:21:24.926976   30211 main.go:141] libmachine: (multinode-484895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:f0:8c", ip: ""} in network mk-multinode-484895: {Iface:virbr1 ExpiryTime:2024-01-03 20:21:15 +0000 UTC Type:0 Mac:52:54:00:28:f0:8c Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:multinode-484895 Clientid:01:52:54:00:28:f0:8c}
	I0103 19:21:24.927013   30211 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined IP address 192.168.39.191 and MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:21:24.927193   30211 main.go:141] libmachine: (multinode-484895) Calling .DriverName
	I0103 19:21:24.927689   30211 main.go:141] libmachine: (multinode-484895) Calling .DriverName
	I0103 19:21:24.927902   30211 main.go:141] libmachine: (multinode-484895) Calling .DriverName
	I0103 19:21:24.928017   30211 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0103 19:21:24.928053   30211 main.go:141] libmachine: (multinode-484895) Calling .GetSSHHostname
	I0103 19:21:24.928128   30211 ssh_runner.go:195] Run: cat /version.json
	I0103 19:21:24.928157   30211 main.go:141] libmachine: (multinode-484895) Calling .GetSSHHostname
	I0103 19:21:24.931072   30211 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:21:24.931101   30211 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:21:24.931425   30211 main.go:141] libmachine: (multinode-484895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:f0:8c", ip: ""} in network mk-multinode-484895: {Iface:virbr1 ExpiryTime:2024-01-03 20:21:15 +0000 UTC Type:0 Mac:52:54:00:28:f0:8c Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:multinode-484895 Clientid:01:52:54:00:28:f0:8c}
	I0103 19:21:24.931461   30211 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined IP address 192.168.39.191 and MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:21:24.931491   30211 main.go:141] libmachine: (multinode-484895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:f0:8c", ip: ""} in network mk-multinode-484895: {Iface:virbr1 ExpiryTime:2024-01-03 20:21:15 +0000 UTC Type:0 Mac:52:54:00:28:f0:8c Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:multinode-484895 Clientid:01:52:54:00:28:f0:8c}
	I0103 19:21:24.931527   30211 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined IP address 192.168.39.191 and MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:21:24.931594   30211 main.go:141] libmachine: (multinode-484895) Calling .GetSSHPort
	I0103 19:21:24.931743   30211 main.go:141] libmachine: (multinode-484895) Calling .GetSSHPort
	I0103 19:21:24.931802   30211 main.go:141] libmachine: (multinode-484895) Calling .GetSSHKeyPath
	I0103 19:21:24.931873   30211 main.go:141] libmachine: (multinode-484895) Calling .GetSSHKeyPath
	I0103 19:21:24.931948   30211 main.go:141] libmachine: (multinode-484895) Calling .GetSSHUsername
	I0103 19:21:24.931983   30211 main.go:141] libmachine: (multinode-484895) Calling .GetSSHUsername
	I0103 19:21:24.932068   30211 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/multinode-484895/id_rsa Username:docker}
	I0103 19:21:24.932103   30211 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/multinode-484895/id_rsa Username:docker}
	I0103 19:21:25.076989   30211 command_runner.go:130] > {"iso_version": "v1.32.1-1702708929-17806", "kicbase_version": "v0.0.42-1702660877-17806", "minikube_version": "v1.32.0", "commit": "957da21b08687cca2533dd65b67e68ead277b79e"}
	I0103 19:21:25.077032   30211 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
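
/version.json on the guest is a small JSON document, shown verbatim above. Decoding it needs only a struct whose tags match those keys (the struct name is invented for this sketch):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // isoInfo mirrors the fields visible in the /version.json payload above.
    type isoInfo struct {
        ISOVersion      string `json:"iso_version"`
        KicbaseVersion  string `json:"kicbase_version"`
        MinikubeVersion string `json:"minikube_version"`
        Commit          string `json:"commit"`
    }

    func main() {
        raw := `{"iso_version": "v1.32.1-1702708929-17806", "kicbase_version": "v0.0.42-1702660877-17806", "minikube_version": "v1.32.0", "commit": "957da21b08687cca2533dd65b67e68ead277b79e"}`
        var v isoInfo
        if err := json.Unmarshal([]byte(raw), &v); err != nil {
            panic(err)
        }
        fmt.Printf("ISO %s built for minikube %s (commit %s)\n", v.ISOVersion, v.MinikubeVersion, v.Commit)
    }
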
	I0103 19:21:25.077159   30211 ssh_runner.go:195] Run: systemctl --version
	I0103 19:21:25.082647   30211 command_runner.go:130] > systemd 247 (247)
	I0103 19:21:25.082685   30211 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0103 19:21:25.082749   30211 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0103 19:21:25.242713   30211 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0103 19:21:25.248286   30211 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0103 19:21:25.248327   30211 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0103 19:21:25.248393   30211 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0103 19:21:25.262969   30211 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0103 19:21:25.263053   30211 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
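
Bridge and podman CNI configs are disabled by renaming them with a .mk_disabled suffix, as the find/mv above shows. A safe-to-run Go sketch of the same rename, pointed at a temporary directory instead of /etc/cni/net.d:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        dir, err := os.MkdirTemp("", "cni")
        if err != nil {
            panic(err)
        }
        defer os.RemoveAll(dir)
        // Create a stand-in for 87-podman-bridge.conflist.
        sample := filepath.Join(dir, "87-podman-bridge.conflist")
        if err := os.WriteFile(sample, []byte("{}"), 0o644); err != nil {
            panic(err)
        }
        for _, pattern := range []string{"*bridge*", "*podman*"} {
            matches, _ := filepath.Glob(filepath.Join(dir, pattern))
            for _, m := range matches {
                if filepath.Ext(m) == ".mk_disabled" {
                    continue // already moved aside
                }
                if err := os.Rename(m, m+".mk_disabled"); err != nil {
                    panic(err)
                }
                fmt.Println("disabled", m)
            }
        }
    }
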
	I0103 19:21:25.263070   30211 start.go:475] detecting cgroup driver to use...
	I0103 19:21:25.263132   30211 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0103 19:21:25.276746   30211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0103 19:21:25.289204   30211 docker.go:203] disabling cri-docker service (if available) ...
	I0103 19:21:25.289265   30211 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0103 19:21:25.301933   30211 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0103 19:21:25.314885   30211 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0103 19:21:25.328683   30211 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I0103 19:21:25.422410   30211 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0103 19:21:25.435758   30211 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0103 19:21:25.532000   30211 docker.go:219] disabling docker service ...
	I0103 19:21:25.532068   30211 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0103 19:21:25.544305   30211 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0103 19:21:25.554837   30211 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I0103 19:21:25.555044   30211 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0103 19:21:25.665991   30211 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0103 19:21:25.666138   30211 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0103 19:21:25.784282   30211 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I0103 19:21:25.784313   30211 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0103 19:21:25.784379   30211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0103 19:21:25.797043   30211 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0103 19:21:25.813474   30211 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0103 19:21:25.813526   30211 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0103 19:21:25.813578   30211 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 19:21:25.822570   30211 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0103 19:21:25.822641   30211 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 19:21:25.831576   30211 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 19:21:25.840306   30211 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 19:21:25.848804   30211 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0103 19:21:25.857839   30211 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0103 19:21:25.866086   30211 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0103 19:21:25.866346   30211 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0103 19:21:25.866417   30211 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0103 19:21:25.878715   30211 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0103 19:21:25.887159   30211 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0103 19:21:26.004024   30211 ssh_runner.go:195] Run: sudo systemctl restart crio
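
The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch the cgroup manager to cgroupfs, and re-add conmon_cgroup = "pod". An in-memory Go equivalent of those edits (the sample config text is illustrative):

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        conf := `[crio.image]
    pause_image = "registry.k8s.io/pause:3.6"

    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "system.slice"
    `
        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
        conf = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*$\n`).
            ReplaceAllString(conf, "") // drop any existing conmon_cgroup line
        conf = regexp.MustCompile(`(?m)^cgroup_manager = .*$`).
            ReplaceAllString(conf, "$0\nconmon_cgroup = \"pod\"")
        fmt.Print(conf)
    }
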
	I0103 19:21:26.172232   30211 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0103 19:21:26.172299   30211 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0103 19:21:26.176530   30211 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0103 19:21:26.176578   30211 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0103 19:21:26.176590   30211 command_runner.go:130] > Device: 16h/22d	Inode: 753         Links: 1
	I0103 19:21:26.176606   30211 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0103 19:21:26.176615   30211 command_runner.go:130] > Access: 2024-01-03 19:21:26.134391487 +0000
	I0103 19:21:26.176626   30211 command_runner.go:130] > Modify: 2024-01-03 19:21:26.134391487 +0000
	I0103 19:21:26.176634   30211 command_runner.go:130] > Change: 2024-01-03 19:21:26.134391487 +0000
	I0103 19:21:26.176642   30211 command_runner.go:130] >  Birth: -
	I0103 19:21:26.176663   30211 start.go:543] Will wait 60s for crictl version
	I0103 19:21:26.176701   30211 ssh_runner.go:195] Run: which crictl
	I0103 19:21:26.180425   30211 command_runner.go:130] > /usr/bin/crictl
	I0103 19:21:26.180499   30211 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0103 19:21:26.215599   30211 command_runner.go:130] > Version:  0.1.0
	I0103 19:21:26.215619   30211 command_runner.go:130] > RuntimeName:  cri-o
	I0103 19:21:26.215624   30211 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0103 19:21:26.215629   30211 command_runner.go:130] > RuntimeApiVersion:  v1
	I0103 19:21:26.215646   30211 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
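
crictl reports its version as plain "Key:  value" lines, echoed above. A few lines of Go are enough to scan that output into a map (a sketch, not the parser minikube uses):

    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    func main() {
        out := `Version:  0.1.0
    RuntimeName:  cri-o
    RuntimeVersion:  1.24.1
    RuntimeApiVersion:  v1`
        kv := map[string]string{}
        sc := bufio.NewScanner(strings.NewReader(out))
        for sc.Scan() {
            if k, v, ok := strings.Cut(sc.Text(), ":"); ok {
                kv[strings.TrimSpace(k)] = strings.TrimSpace(v)
            }
        }
        fmt.Println("runtime:", kv["RuntimeName"], kv["RuntimeVersion"])
    }
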
	I0103 19:21:26.215700   30211 ssh_runner.go:195] Run: crio --version
	I0103 19:21:26.259542   30211 command_runner.go:130] > crio version 1.24.1
	I0103 19:21:26.259570   30211 command_runner.go:130] > Version:          1.24.1
	I0103 19:21:26.259581   30211 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0103 19:21:26.259586   30211 command_runner.go:130] > GitTreeState:     dirty
	I0103 19:21:26.259592   30211 command_runner.go:130] > BuildDate:        2023-12-16T11:46:37Z
	I0103 19:21:26.259599   30211 command_runner.go:130] > GoVersion:        go1.19.9
	I0103 19:21:26.259606   30211 command_runner.go:130] > Compiler:         gc
	I0103 19:21:26.259617   30211 command_runner.go:130] > Platform:         linux/amd64
	I0103 19:21:26.259625   30211 command_runner.go:130] > Linkmode:         dynamic
	I0103 19:21:26.259637   30211 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0103 19:21:26.259653   30211 command_runner.go:130] > SeccompEnabled:   true
	I0103 19:21:26.259661   30211 command_runner.go:130] > AppArmorEnabled:  false
	I0103 19:21:26.259767   30211 ssh_runner.go:195] Run: crio --version
	I0103 19:21:26.306454   30211 command_runner.go:130] > crio version 1.24.1
	I0103 19:21:26.306483   30211 command_runner.go:130] > Version:          1.24.1
	I0103 19:21:26.306494   30211 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0103 19:21:26.306500   30211 command_runner.go:130] > GitTreeState:     dirty
	I0103 19:21:26.306513   30211 command_runner.go:130] > BuildDate:        2023-12-16T11:46:37Z
	I0103 19:21:26.306530   30211 command_runner.go:130] > GoVersion:        go1.19.9
	I0103 19:21:26.306540   30211 command_runner.go:130] > Compiler:         gc
	I0103 19:21:26.306548   30211 command_runner.go:130] > Platform:         linux/amd64
	I0103 19:21:26.306562   30211 command_runner.go:130] > Linkmode:         dynamic
	I0103 19:21:26.306576   30211 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0103 19:21:26.306585   30211 command_runner.go:130] > SeccompEnabled:   true
	I0103 19:21:26.306595   30211 command_runner.go:130] > AppArmorEnabled:  false
	I0103 19:21:26.309696   30211 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0103 19:21:26.311023   30211 main.go:141] libmachine: (multinode-484895) Calling .GetIP
	I0103 19:21:26.313911   30211 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:21:26.314559   30211 main.go:141] libmachine: (multinode-484895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:f0:8c", ip: ""} in network mk-multinode-484895: {Iface:virbr1 ExpiryTime:2024-01-03 20:21:15 +0000 UTC Type:0 Mac:52:54:00:28:f0:8c Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:multinode-484895 Clientid:01:52:54:00:28:f0:8c}
	I0103 19:21:26.314589   30211 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined IP address 192.168.39.191 and MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:21:26.314840   30211 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0103 19:21:26.318864   30211 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0103 19:21:26.331042   30211 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0103 19:21:26.331120   30211 ssh_runner.go:195] Run: sudo crictl images --output json
	I0103 19:21:26.360615   30211 command_runner.go:130] > {
	I0103 19:21:26.360635   30211 command_runner.go:130] >   "images": [
	I0103 19:21:26.360639   30211 command_runner.go:130] >   ]
	I0103 19:21:26.360643   30211 command_runner.go:130] > }
	I0103 19:21:26.361710   30211 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
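
"crictl images --output json" returned an empty images array, which is why the preload tarball is fetched next. A hedged Go sketch of checking that JSON for a wanted repo tag (the struct covers only the fields shown in this log):

    package main

    import (
        "encoding/json"
        "fmt"
        "strings"
    )

    // imageList mirrors the JSON shape printed by "crictl images --output json".
    type imageList struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    func hasImage(raw, wanted string) bool {
        var l imageList
        if err := json.Unmarshal([]byte(raw), &l); err != nil {
            return false
        }
        for _, img := range l.Images {
            for _, tag := range img.RepoTags {
                if strings.HasPrefix(tag, wanted) {
                    return true
                }
            }
        }
        return false
    }

    func main() {
        empty := `{"images": []}`
        fmt.Println(hasImage(empty, "registry.k8s.io/kube-apiserver")) // false -> fetch the preload
    }
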
	I0103 19:21:26.361783   30211 ssh_runner.go:195] Run: which lz4
	I0103 19:21:26.365559   30211 command_runner.go:130] > /usr/bin/lz4
	I0103 19:21:26.365601   30211 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0103 19:21:26.365680   30211 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0103 19:21:26.369267   30211 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0103 19:21:26.369418   30211 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0103 19:21:26.369454   30211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0103 19:21:27.975104   30211 crio.go:444] Took 1.609448 seconds to copy over tarball
	I0103 19:21:27.975178   30211 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0103 19:21:30.678900   30211 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.703697898s)
	I0103 19:21:30.678927   30211 crio.go:451] Took 2.703798 seconds to extract the tarball
	I0103 19:21:30.678937   30211 ssh_runner.go:146] rm: /preloaded.tar.lz4
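
Several of the steps above are wrapped in duration metrics ("Took 1.609448 seconds to copy over tarball", "(2.703697898s)"). The bookkeeping behind such lines is just timing a command; a tiny Go sketch with an invented helper name, using a harmless local command so it runs anywhere:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // runTimed runs a command and reports how long it took, in the spirit of
    // the "Completed: ... (2.703697898s)" lines above.
    func runTimed(name string, args ...string) (time.Duration, error) {
        start := time.Now()
        err := exec.Command(name, args...).Run()
        return time.Since(start), err
    }

    func main() {
        // Stand-in for the remote "tar -I lz4 -C /var -xf /preloaded.tar.lz4".
        d, err := runTimed("tar", "--version")
        fmt.Printf("Completed: tar --version: (%s) err=%v\n", d, err)
    }
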
	I0103 19:21:30.718922   30211 ssh_runner.go:195] Run: sudo crictl images --output json
	I0103 19:21:30.780642   30211 command_runner.go:130] > {
	I0103 19:21:30.780664   30211 command_runner.go:130] >   "images": [
	I0103 19:21:30.780668   30211 command_runner.go:130] >     {
	I0103 19:21:30.780675   30211 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I0103 19:21:30.780680   30211 command_runner.go:130] >       "repoTags": [
	I0103 19:21:30.780686   30211 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0103 19:21:30.780689   30211 command_runner.go:130] >       ],
	I0103 19:21:30.780694   30211 command_runner.go:130] >       "repoDigests": [
	I0103 19:21:30.780702   30211 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0103 19:21:30.780709   30211 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I0103 19:21:30.780713   30211 command_runner.go:130] >       ],
	I0103 19:21:30.780718   30211 command_runner.go:130] >       "size": "65258016",
	I0103 19:21:30.780725   30211 command_runner.go:130] >       "uid": null,
	I0103 19:21:30.780729   30211 command_runner.go:130] >       "username": "",
	I0103 19:21:30.780738   30211 command_runner.go:130] >       "spec": null,
	I0103 19:21:30.780748   30211 command_runner.go:130] >       "pinned": false
	I0103 19:21:30.780752   30211 command_runner.go:130] >     },
	I0103 19:21:30.780757   30211 command_runner.go:130] >     {
	I0103 19:21:30.780769   30211 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0103 19:21:30.780776   30211 command_runner.go:130] >       "repoTags": [
	I0103 19:21:30.780781   30211 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0103 19:21:30.780787   30211 command_runner.go:130] >       ],
	I0103 19:21:30.780791   30211 command_runner.go:130] >       "repoDigests": [
	I0103 19:21:30.780799   30211 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0103 19:21:30.780808   30211 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0103 19:21:30.780812   30211 command_runner.go:130] >       ],
	I0103 19:21:30.780823   30211 command_runner.go:130] >       "size": "31470524",
	I0103 19:21:30.780829   30211 command_runner.go:130] >       "uid": null,
	I0103 19:21:30.780833   30211 command_runner.go:130] >       "username": "",
	I0103 19:21:30.780837   30211 command_runner.go:130] >       "spec": null,
	I0103 19:21:30.780841   30211 command_runner.go:130] >       "pinned": false
	I0103 19:21:30.780845   30211 command_runner.go:130] >     },
	I0103 19:21:30.780848   30211 command_runner.go:130] >     {
	I0103 19:21:30.780856   30211 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0103 19:21:30.780861   30211 command_runner.go:130] >       "repoTags": [
	I0103 19:21:30.780871   30211 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0103 19:21:30.780877   30211 command_runner.go:130] >       ],
	I0103 19:21:30.780881   30211 command_runner.go:130] >       "repoDigests": [
	I0103 19:21:30.780890   30211 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0103 19:21:30.780900   30211 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0103 19:21:30.780904   30211 command_runner.go:130] >       ],
	I0103 19:21:30.780908   30211 command_runner.go:130] >       "size": "53621675",
	I0103 19:21:30.780914   30211 command_runner.go:130] >       "uid": null,
	I0103 19:21:30.780919   30211 command_runner.go:130] >       "username": "",
	I0103 19:21:30.780925   30211 command_runner.go:130] >       "spec": null,
	I0103 19:21:30.780929   30211 command_runner.go:130] >       "pinned": false
	I0103 19:21:30.780933   30211 command_runner.go:130] >     },
	I0103 19:21:30.780936   30211 command_runner.go:130] >     {
	I0103 19:21:30.780942   30211 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I0103 19:21:30.780949   30211 command_runner.go:130] >       "repoTags": [
	I0103 19:21:30.780954   30211 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0103 19:21:30.780959   30211 command_runner.go:130] >       ],
	I0103 19:21:30.780964   30211 command_runner.go:130] >       "repoDigests": [
	I0103 19:21:30.780974   30211 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I0103 19:21:30.780983   30211 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I0103 19:21:30.780994   30211 command_runner.go:130] >       ],
	I0103 19:21:30.781001   30211 command_runner.go:130] >       "size": "295456551",
	I0103 19:21:30.781005   30211 command_runner.go:130] >       "uid": {
	I0103 19:21:30.781009   30211 command_runner.go:130] >         "value": "0"
	I0103 19:21:30.781012   30211 command_runner.go:130] >       },
	I0103 19:21:30.781016   30211 command_runner.go:130] >       "username": "",
	I0103 19:21:30.781020   30211 command_runner.go:130] >       "spec": null,
	I0103 19:21:30.781027   30211 command_runner.go:130] >       "pinned": false
	I0103 19:21:30.781031   30211 command_runner.go:130] >     },
	I0103 19:21:30.781036   30211 command_runner.go:130] >     {
	I0103 19:21:30.781042   30211 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I0103 19:21:30.781049   30211 command_runner.go:130] >       "repoTags": [
	I0103 19:21:30.781053   30211 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I0103 19:21:30.781059   30211 command_runner.go:130] >       ],
	I0103 19:21:30.781064   30211 command_runner.go:130] >       "repoDigests": [
	I0103 19:21:30.781073   30211 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I0103 19:21:30.781083   30211 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I0103 19:21:30.781089   30211 command_runner.go:130] >       ],
	I0103 19:21:30.781093   30211 command_runner.go:130] >       "size": "127226832",
	I0103 19:21:30.781099   30211 command_runner.go:130] >       "uid": {
	I0103 19:21:30.781103   30211 command_runner.go:130] >         "value": "0"
	I0103 19:21:30.781107   30211 command_runner.go:130] >       },
	I0103 19:21:30.781113   30211 command_runner.go:130] >       "username": "",
	I0103 19:21:30.781117   30211 command_runner.go:130] >       "spec": null,
	I0103 19:21:30.781122   30211 command_runner.go:130] >       "pinned": false
	I0103 19:21:30.781126   30211 command_runner.go:130] >     },
	I0103 19:21:30.781132   30211 command_runner.go:130] >     {
	I0103 19:21:30.781137   30211 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I0103 19:21:30.781144   30211 command_runner.go:130] >       "repoTags": [
	I0103 19:21:30.781149   30211 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I0103 19:21:30.781153   30211 command_runner.go:130] >       ],
	I0103 19:21:30.781157   30211 command_runner.go:130] >       "repoDigests": [
	I0103 19:21:30.781167   30211 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I0103 19:21:30.781176   30211 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I0103 19:21:30.781184   30211 command_runner.go:130] >       ],
	I0103 19:21:30.781189   30211 command_runner.go:130] >       "size": "123261750",
	I0103 19:21:30.781195   30211 command_runner.go:130] >       "uid": {
	I0103 19:21:30.781199   30211 command_runner.go:130] >         "value": "0"
	I0103 19:21:30.781202   30211 command_runner.go:130] >       },
	I0103 19:21:30.781206   30211 command_runner.go:130] >       "username": "",
	I0103 19:21:30.781211   30211 command_runner.go:130] >       "spec": null,
	I0103 19:21:30.781215   30211 command_runner.go:130] >       "pinned": false
	I0103 19:21:30.781219   30211 command_runner.go:130] >     },
	I0103 19:21:30.781228   30211 command_runner.go:130] >     {
	I0103 19:21:30.781236   30211 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I0103 19:21:30.781241   30211 command_runner.go:130] >       "repoTags": [
	I0103 19:21:30.781246   30211 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I0103 19:21:30.781250   30211 command_runner.go:130] >       ],
	I0103 19:21:30.781254   30211 command_runner.go:130] >       "repoDigests": [
	I0103 19:21:30.781263   30211 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I0103 19:21:30.781270   30211 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I0103 19:21:30.781276   30211 command_runner.go:130] >       ],
	I0103 19:21:30.781284   30211 command_runner.go:130] >       "size": "74749335",
	I0103 19:21:30.781291   30211 command_runner.go:130] >       "uid": null,
	I0103 19:21:30.781295   30211 command_runner.go:130] >       "username": "",
	I0103 19:21:30.781299   30211 command_runner.go:130] >       "spec": null,
	I0103 19:21:30.781305   30211 command_runner.go:130] >       "pinned": false
	I0103 19:21:30.781309   30211 command_runner.go:130] >     },
	I0103 19:21:30.781315   30211 command_runner.go:130] >     {
	I0103 19:21:30.781321   30211 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I0103 19:21:30.781327   30211 command_runner.go:130] >       "repoTags": [
	I0103 19:21:30.781333   30211 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I0103 19:21:30.781337   30211 command_runner.go:130] >       ],
	I0103 19:21:30.781341   30211 command_runner.go:130] >       "repoDigests": [
	I0103 19:21:30.781362   30211 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I0103 19:21:30.781372   30211 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I0103 19:21:30.781376   30211 command_runner.go:130] >       ],
	I0103 19:21:30.781380   30211 command_runner.go:130] >       "size": "61551410",
	I0103 19:21:30.781386   30211 command_runner.go:130] >       "uid": {
	I0103 19:21:30.781390   30211 command_runner.go:130] >         "value": "0"
	I0103 19:21:30.781396   30211 command_runner.go:130] >       },
	I0103 19:21:30.781402   30211 command_runner.go:130] >       "username": "",
	I0103 19:21:30.781406   30211 command_runner.go:130] >       "spec": null,
	I0103 19:21:30.781410   30211 command_runner.go:130] >       "pinned": false
	I0103 19:21:30.781414   30211 command_runner.go:130] >     },
	I0103 19:21:30.781418   30211 command_runner.go:130] >     {
	I0103 19:21:30.781429   30211 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0103 19:21:30.781440   30211 command_runner.go:130] >       "repoTags": [
	I0103 19:21:30.781449   30211 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0103 19:21:30.781455   30211 command_runner.go:130] >       ],
	I0103 19:21:30.781465   30211 command_runner.go:130] >       "repoDigests": [
	I0103 19:21:30.781474   30211 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0103 19:21:30.781487   30211 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0103 19:21:30.781496   30211 command_runner.go:130] >       ],
	I0103 19:21:30.781501   30211 command_runner.go:130] >       "size": "750414",
	I0103 19:21:30.781507   30211 command_runner.go:130] >       "uid": {
	I0103 19:21:30.781519   30211 command_runner.go:130] >         "value": "65535"
	I0103 19:21:30.781525   30211 command_runner.go:130] >       },
	I0103 19:21:30.781540   30211 command_runner.go:130] >       "username": "",
	I0103 19:21:30.781550   30211 command_runner.go:130] >       "spec": null,
	I0103 19:21:30.781555   30211 command_runner.go:130] >       "pinned": false
	I0103 19:21:30.781563   30211 command_runner.go:130] >     }
	I0103 19:21:30.781569   30211 command_runner.go:130] >   ]
	I0103 19:21:30.781573   30211 command_runner.go:130] > }
	I0103 19:21:30.781691   30211 crio.go:496] all images are preloaded for cri-o runtime.
	I0103 19:21:30.781702   30211 cache_images.go:84] Images are preloaded, skipping loading
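
	The JSON block above is CRI-O's image listing (the same shape that "crictl images -o json" returns); minikube compares it against the images expected for the preloaded Kubernetes v1.28.4 tarball before deciding to skip image loading. The sketch below is illustrative only, not minikube's implementation: it assumes crictl is on PATH on the node and can reach the CRI-O socket, and the expected tag list is a hand-picked subset of the dump above.

	// checkpreload.go - illustrative sketch; assumes crictl is installed and
	// configured to talk to CRI-O (e.g. unix:///var/run/crio/crio.sock).
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type imageList struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	func main() {
		// Hypothetical expected set; the real preload list depends on the Kubernetes version.
		expected := []string{
			"registry.k8s.io/kube-apiserver:v1.28.4",
			"registry.k8s.io/etcd:3.5.9-0",
			"registry.k8s.io/coredns/coredns:v1.10.1",
		}

		out, err := exec.Command("crictl", "images", "-o", "json").Output()
		if err != nil {
			panic(err)
		}
		var list imageList
		if err := json.Unmarshal(out, &list); err != nil {
			panic(err)
		}

		have := map[string]bool{}
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				have[tag] = true
			}
		}
		for _, want := range expected {
			fmt.Printf("%s preloaded: %v\n", want, have[want])
		}
	}

	Run on the node (for example via minikube ssh), it prints true for each tag that is already in the image store, which is the condition the "all images are preloaded" message above reports.
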
	I0103 19:21:30.781762   30211 ssh_runner.go:195] Run: crio config
	I0103 19:21:30.830973   30211 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0103 19:21:30.830994   30211 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0103 19:21:30.831000   30211 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0103 19:21:30.831004   30211 command_runner.go:130] > #
	I0103 19:21:30.831011   30211 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0103 19:21:30.831017   30211 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0103 19:21:30.831022   30211 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0103 19:21:30.831035   30211 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0103 19:21:30.831041   30211 command_runner.go:130] > # reload'.
	I0103 19:21:30.831048   30211 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0103 19:21:30.831056   30211 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0103 19:21:30.831063   30211 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0103 19:21:30.831071   30211 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0103 19:21:30.831077   30211 command_runner.go:130] > [crio]
	I0103 19:21:30.831083   30211 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0103 19:21:30.831091   30211 command_runner.go:130] > # container images, in this directory.
	I0103 19:21:30.831230   30211 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0103 19:21:30.831268   30211 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0103 19:21:30.831277   30211 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0103 19:21:30.831287   30211 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0103 19:21:30.831300   30211 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0103 19:21:30.831314   30211 command_runner.go:130] > storage_driver = "overlay"
	I0103 19:21:30.831324   30211 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0103 19:21:30.831336   30211 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0103 19:21:30.831347   30211 command_runner.go:130] > storage_option = [
	I0103 19:21:30.831357   30211 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0103 19:21:30.831366   30211 command_runner.go:130] > ]
	I0103 19:21:30.831378   30211 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0103 19:21:30.831391   30211 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0103 19:21:30.831400   30211 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0103 19:21:30.831432   30211 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0103 19:21:30.831449   30211 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0103 19:21:30.831458   30211 command_runner.go:130] > # always happen on a node reboot
	I0103 19:21:30.831471   30211 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0103 19:21:30.831484   30211 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0103 19:21:30.831498   30211 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0103 19:21:30.831546   30211 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0103 19:21:30.831559   30211 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0103 19:21:30.831580   30211 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0103 19:21:30.831597   30211 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0103 19:21:30.831608   30211 command_runner.go:130] > # internal_wipe = true
	I0103 19:21:30.831620   30211 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0103 19:21:30.831634   30211 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0103 19:21:30.831647   30211 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0103 19:21:30.831660   30211 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0103 19:21:30.831674   30211 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0103 19:21:30.831684   30211 command_runner.go:130] > [crio.api]
	I0103 19:21:30.831695   30211 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0103 19:21:30.831711   30211 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0103 19:21:30.831724   30211 command_runner.go:130] > # IP address on which the stream server will listen.
	I0103 19:21:30.831752   30211 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0103 19:21:30.831768   30211 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0103 19:21:30.831784   30211 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0103 19:21:30.831797   30211 command_runner.go:130] > # stream_port = "0"
	I0103 19:21:30.831810   30211 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0103 19:21:30.831820   30211 command_runner.go:130] > # stream_enable_tls = false
	I0103 19:21:30.831835   30211 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0103 19:21:30.831846   30211 command_runner.go:130] > # stream_idle_timeout = ""
	I0103 19:21:30.831858   30211 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0103 19:21:30.831872   30211 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0103 19:21:30.831882   30211 command_runner.go:130] > # minutes.
	I0103 19:21:30.831890   30211 command_runner.go:130] > # stream_tls_cert = ""
	I0103 19:21:30.831904   30211 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0103 19:21:30.831918   30211 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0103 19:21:30.831929   30211 command_runner.go:130] > # stream_tls_key = ""
	I0103 19:21:30.831940   30211 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0103 19:21:30.831954   30211 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0103 19:21:30.831966   30211 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0103 19:21:30.831974   30211 command_runner.go:130] > # stream_tls_ca = ""
	I0103 19:21:30.831990   30211 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0103 19:21:30.832001   30211 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0103 19:21:30.832016   30211 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0103 19:21:30.832025   30211 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0103 19:21:30.832052   30211 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0103 19:21:30.832070   30211 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0103 19:21:30.832083   30211 command_runner.go:130] > [crio.runtime]
	I0103 19:21:30.832097   30211 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0103 19:21:30.832110   30211 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0103 19:21:30.832120   30211 command_runner.go:130] > # "nofile=1024:2048"
	I0103 19:21:30.832137   30211 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0103 19:21:30.832147   30211 command_runner.go:130] > # default_ulimits = [
	I0103 19:21:30.832155   30211 command_runner.go:130] > # ]
	I0103 19:21:30.832167   30211 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0103 19:21:30.832176   30211 command_runner.go:130] > # no_pivot = false
	I0103 19:21:30.832187   30211 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0103 19:21:30.832201   30211 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0103 19:21:30.832212   30211 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0103 19:21:30.832226   30211 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0103 19:21:30.832238   30211 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0103 19:21:30.832253   30211 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0103 19:21:30.832265   30211 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0103 19:21:30.832276   30211 command_runner.go:130] > # Cgroup setting for conmon
	I0103 19:21:30.832294   30211 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0103 19:21:30.832305   30211 command_runner.go:130] > conmon_cgroup = "pod"
	I0103 19:21:30.832320   30211 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0103 19:21:30.832332   30211 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0103 19:21:30.832347   30211 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0103 19:21:30.832357   30211 command_runner.go:130] > conmon_env = [
	I0103 19:21:30.832371   30211 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0103 19:21:30.832379   30211 command_runner.go:130] > ]
	I0103 19:21:30.832390   30211 command_runner.go:130] > # Additional environment variables to set for all the
	I0103 19:21:30.832406   30211 command_runner.go:130] > # containers. These are overridden if set in the
	I0103 19:21:30.832423   30211 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0103 19:21:30.832434   30211 command_runner.go:130] > # default_env = [
	I0103 19:21:30.832440   30211 command_runner.go:130] > # ]
	I0103 19:21:30.832454   30211 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0103 19:21:30.832464   30211 command_runner.go:130] > # selinux = false
	I0103 19:21:30.832482   30211 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0103 19:21:30.832496   30211 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0103 19:21:30.832522   30211 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0103 19:21:30.832537   30211 command_runner.go:130] > # seccomp_profile = ""
	I0103 19:21:30.832551   30211 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0103 19:21:30.832565   30211 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0103 19:21:30.832579   30211 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0103 19:21:30.832591   30211 command_runner.go:130] > # which might increase security.
	I0103 19:21:30.832603   30211 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0103 19:21:30.832618   30211 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0103 19:21:30.832631   30211 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0103 19:21:30.832644   30211 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0103 19:21:30.832658   30211 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0103 19:21:30.832671   30211 command_runner.go:130] > # This option supports live configuration reload.
	I0103 19:21:30.832682   30211 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0103 19:21:30.832696   30211 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0103 19:21:30.832707   30211 command_runner.go:130] > # the cgroup blockio controller.
	I0103 19:21:30.832717   30211 command_runner.go:130] > # blockio_config_file = ""
	I0103 19:21:30.832730   30211 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0103 19:21:30.832740   30211 command_runner.go:130] > # irqbalance daemon.
	I0103 19:21:30.832753   30211 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0103 19:21:30.832771   30211 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0103 19:21:30.832789   30211 command_runner.go:130] > # This option supports live configuration reload.
	I0103 19:21:30.832800   30211 command_runner.go:130] > # rdt_config_file = ""
	I0103 19:21:30.832813   30211 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0103 19:21:30.832823   30211 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0103 19:21:30.832833   30211 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0103 19:21:30.832844   30211 command_runner.go:130] > # separate_pull_cgroup = ""
	I0103 19:21:30.832863   30211 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0103 19:21:30.832882   30211 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0103 19:21:30.832892   30211 command_runner.go:130] > # will be added.
	I0103 19:21:30.832903   30211 command_runner.go:130] > # default_capabilities = [
	I0103 19:21:30.832913   30211 command_runner.go:130] > # 	"CHOWN",
	I0103 19:21:30.832921   30211 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0103 19:21:30.832930   30211 command_runner.go:130] > # 	"FSETID",
	I0103 19:21:30.832939   30211 command_runner.go:130] > # 	"FOWNER",
	I0103 19:21:30.832949   30211 command_runner.go:130] > # 	"SETGID",
	I0103 19:21:30.832958   30211 command_runner.go:130] > # 	"SETUID",
	I0103 19:21:30.832965   30211 command_runner.go:130] > # 	"SETPCAP",
	I0103 19:21:30.832980   30211 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0103 19:21:30.832991   30211 command_runner.go:130] > # 	"KILL",
	I0103 19:21:30.832998   30211 command_runner.go:130] > # ]
	I0103 19:21:30.833009   30211 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0103 19:21:30.833023   30211 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0103 19:21:30.833033   30211 command_runner.go:130] > # default_sysctls = [
	I0103 19:21:30.833040   30211 command_runner.go:130] > # ]
	I0103 19:21:30.833052   30211 command_runner.go:130] > # List of devices on the host that a
	I0103 19:21:30.833064   30211 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0103 19:21:30.833075   30211 command_runner.go:130] > # allowed_devices = [
	I0103 19:21:30.833083   30211 command_runner.go:130] > # 	"/dev/fuse",
	I0103 19:21:30.833092   30211 command_runner.go:130] > # ]
	I0103 19:21:30.833101   30211 command_runner.go:130] > # List of additional devices, specified as
	I0103 19:21:30.833116   30211 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0103 19:21:30.833129   30211 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0103 19:21:30.833168   30211 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0103 19:21:30.833178   30211 command_runner.go:130] > # additional_devices = [
	I0103 19:21:30.833184   30211 command_runner.go:130] > # ]
	I0103 19:21:30.833196   30211 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0103 19:21:30.833206   30211 command_runner.go:130] > # cdi_spec_dirs = [
	I0103 19:21:30.833215   30211 command_runner.go:130] > # 	"/etc/cdi",
	I0103 19:21:30.833226   30211 command_runner.go:130] > # 	"/var/run/cdi",
	I0103 19:21:30.833234   30211 command_runner.go:130] > # ]
	I0103 19:21:30.833246   30211 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0103 19:21:30.833260   30211 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0103 19:21:30.833270   30211 command_runner.go:130] > # Defaults to false.
	I0103 19:21:30.833282   30211 command_runner.go:130] > # device_ownership_from_security_context = false
	I0103 19:21:30.833297   30211 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0103 19:21:30.833314   30211 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0103 19:21:30.833321   30211 command_runner.go:130] > # hooks_dir = [
	I0103 19:21:30.833329   30211 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0103 19:21:30.833338   30211 command_runner.go:130] > # ]
	I0103 19:21:30.833351   30211 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0103 19:21:30.833365   30211 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0103 19:21:30.833374   30211 command_runner.go:130] > # its default mounts from the following two files:
	I0103 19:21:30.833382   30211 command_runner.go:130] > #
	I0103 19:21:30.833400   30211 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0103 19:21:30.833416   30211 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0103 19:21:30.833429   30211 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0103 19:21:30.833437   30211 command_runner.go:130] > #
	I0103 19:21:30.833448   30211 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0103 19:21:30.833462   30211 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0103 19:21:30.833477   30211 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0103 19:21:30.833501   30211 command_runner.go:130] > #      only add mounts it finds in this file.
	I0103 19:21:30.833515   30211 command_runner.go:130] > #
	I0103 19:21:30.833523   30211 command_runner.go:130] > # default_mounts_file = ""
	I0103 19:21:30.833535   30211 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0103 19:21:30.833550   30211 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0103 19:21:30.833561   30211 command_runner.go:130] > pids_limit = 1024
	I0103 19:21:30.833575   30211 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0103 19:21:30.833591   30211 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0103 19:21:30.833604   30211 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0103 19:21:30.833621   30211 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0103 19:21:30.833631   30211 command_runner.go:130] > # log_size_max = -1
	I0103 19:21:30.833650   30211 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0103 19:21:30.833661   30211 command_runner.go:130] > # log_to_journald = false
	I0103 19:21:30.833673   30211 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0103 19:21:30.833686   30211 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0103 19:21:30.833698   30211 command_runner.go:130] > # Path to directory for container attach sockets.
	I0103 19:21:30.833712   30211 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0103 19:21:30.833725   30211 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0103 19:21:30.833734   30211 command_runner.go:130] > # bind_mount_prefix = ""
	I0103 19:21:30.833749   30211 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0103 19:21:30.833760   30211 command_runner.go:130] > # read_only = false
	I0103 19:21:30.833775   30211 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0103 19:21:30.833790   30211 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0103 19:21:30.833801   30211 command_runner.go:130] > # live configuration reload.
	I0103 19:21:30.833816   30211 command_runner.go:130] > # log_level = "info"
	I0103 19:21:30.833829   30211 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0103 19:21:30.833839   30211 command_runner.go:130] > # This option supports live configuration reload.
	I0103 19:21:30.833849   30211 command_runner.go:130] > # log_filter = ""
	I0103 19:21:30.833863   30211 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0103 19:21:30.833882   30211 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0103 19:21:30.833894   30211 command_runner.go:130] > # separated by comma.
	I0103 19:21:30.833905   30211 command_runner.go:130] > # uid_mappings = ""
	I0103 19:21:30.833916   30211 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0103 19:21:30.833930   30211 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0103 19:21:30.833940   30211 command_runner.go:130] > # separated by comma.
	I0103 19:21:30.833948   30211 command_runner.go:130] > # gid_mappings = ""
	I0103 19:21:30.833962   30211 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0103 19:21:30.833975   30211 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0103 19:21:30.833989   30211 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0103 19:21:30.834004   30211 command_runner.go:130] > # minimum_mappable_uid = -1
	I0103 19:21:30.834014   30211 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0103 19:21:30.834028   30211 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0103 19:21:30.834042   30211 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0103 19:21:30.834053   30211 command_runner.go:130] > # minimum_mappable_gid = -1
	I0103 19:21:30.834067   30211 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0103 19:21:30.834080   30211 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0103 19:21:30.834094   30211 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0103 19:21:30.834108   30211 command_runner.go:130] > # ctr_stop_timeout = 30
	I0103 19:21:30.834127   30211 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0103 19:21:30.834140   30211 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0103 19:21:30.834152   30211 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0103 19:21:30.834164   30211 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0103 19:21:30.834173   30211 command_runner.go:130] > drop_infra_ctr = false
	I0103 19:21:30.834185   30211 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0103 19:21:30.834198   30211 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0103 19:21:30.834214   30211 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0103 19:21:30.834224   30211 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0103 19:21:30.834236   30211 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0103 19:21:30.834248   30211 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0103 19:21:30.834258   30211 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0103 19:21:30.834270   30211 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0103 19:21:30.834281   30211 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0103 19:21:30.834293   30211 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0103 19:21:30.834307   30211 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0103 19:21:30.834322   30211 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0103 19:21:30.834341   30211 command_runner.go:130] > # default_runtime = "runc"
	I0103 19:21:30.834353   30211 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0103 19:21:30.834367   30211 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0103 19:21:30.834386   30211 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0103 19:21:30.834398   30211 command_runner.go:130] > # creation as a file is not desired either.
	I0103 19:21:30.834415   30211 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0103 19:21:30.834427   30211 command_runner.go:130] > # the hostname is being managed dynamically.
	I0103 19:21:30.834438   30211 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0103 19:21:30.834444   30211 command_runner.go:130] > # ]
	I0103 19:21:30.834459   30211 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0103 19:21:30.834473   30211 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0103 19:21:30.834490   30211 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0103 19:21:30.834509   30211 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0103 19:21:30.834527   30211 command_runner.go:130] > #
	I0103 19:21:30.834537   30211 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0103 19:21:30.834549   30211 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0103 19:21:30.834560   30211 command_runner.go:130] > #  runtime_type = "oci"
	I0103 19:21:30.834574   30211 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0103 19:21:30.834589   30211 command_runner.go:130] > #  privileged_without_host_devices = false
	I0103 19:21:30.834600   30211 command_runner.go:130] > #  allowed_annotations = []
	I0103 19:21:30.834608   30211 command_runner.go:130] > # Where:
	I0103 19:21:30.834617   30211 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0103 19:21:30.834629   30211 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0103 19:21:30.834643   30211 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0103 19:21:30.834657   30211 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0103 19:21:30.834667   30211 command_runner.go:130] > #   in $PATH.
	I0103 19:21:30.834682   30211 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0103 19:21:30.834694   30211 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0103 19:21:30.834707   30211 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0103 19:21:30.834715   30211 command_runner.go:130] > #   state.
	I0103 19:21:30.834729   30211 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0103 19:21:30.834745   30211 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0103 19:21:30.834759   30211 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0103 19:21:30.834773   30211 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0103 19:21:30.834787   30211 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0103 19:21:30.834802   30211 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0103 19:21:30.834819   30211 command_runner.go:130] > #   The currently recognized values are:
	I0103 19:21:30.834834   30211 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0103 19:21:30.834850   30211 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0103 19:21:30.834863   30211 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0103 19:21:30.834877   30211 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0103 19:21:30.834892   30211 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0103 19:21:30.834907   30211 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0103 19:21:30.834920   30211 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0103 19:21:30.834935   30211 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0103 19:21:30.834947   30211 command_runner.go:130] > #   should be moved to the container's cgroup
	I0103 19:21:30.834958   30211 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0103 19:21:30.834969   30211 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0103 19:21:30.834981   30211 command_runner.go:130] > runtime_type = "oci"
	I0103 19:21:30.834992   30211 command_runner.go:130] > runtime_root = "/run/runc"
	I0103 19:21:30.835003   30211 command_runner.go:130] > runtime_config_path = ""
	I0103 19:21:30.835011   30211 command_runner.go:130] > monitor_path = ""
	I0103 19:21:30.835026   30211 command_runner.go:130] > monitor_cgroup = ""
	I0103 19:21:30.835037   30211 command_runner.go:130] > monitor_exec_cgroup = ""
	I0103 19:21:30.835054   30211 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0103 19:21:30.835063   30211 command_runner.go:130] > # running containers
	I0103 19:21:30.835076   30211 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0103 19:21:30.835090   30211 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0103 19:21:30.835157   30211 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0103 19:21:30.835170   30211 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0103 19:21:30.835179   30211 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0103 19:21:30.835187   30211 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0103 19:21:30.835199   30211 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0103 19:21:30.835210   30211 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0103 19:21:30.835223   30211 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0103 19:21:30.835234   30211 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0103 19:21:30.835252   30211 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0103 19:21:30.835263   30211 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0103 19:21:30.835275   30211 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0103 19:21:30.835291   30211 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0103 19:21:30.835307   30211 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0103 19:21:30.835320   30211 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0103 19:21:30.835342   30211 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0103 19:21:30.835359   30211 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0103 19:21:30.835371   30211 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0103 19:21:30.835383   30211 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0103 19:21:30.835393   30211 command_runner.go:130] > # Example:
	I0103 19:21:30.835403   30211 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0103 19:21:30.835414   30211 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0103 19:21:30.835427   30211 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0103 19:21:30.835440   30211 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0103 19:21:30.835450   30211 command_runner.go:130] > # cpuset = 0
	I0103 19:21:30.835460   30211 command_runner.go:130] > # cpushares = "0-1"
	I0103 19:21:30.835468   30211 command_runner.go:130] > # Where:
	I0103 19:21:30.835479   30211 command_runner.go:130] > # The workload name is workload-type.
	I0103 19:21:30.835493   30211 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0103 19:21:30.835513   30211 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0103 19:21:30.835528   30211 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0103 19:21:30.835545   30211 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0103 19:21:30.835558   30211 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0103 19:21:30.835570   30211 command_runner.go:130] > # 
	I0103 19:21:30.835585   30211 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0103 19:21:30.835593   30211 command_runner.go:130] > #
	I0103 19:21:30.835604   30211 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0103 19:21:30.835618   30211 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0103 19:21:30.835632   30211 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0103 19:21:30.835646   30211 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0103 19:21:30.835659   30211 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0103 19:21:30.835669   30211 command_runner.go:130] > [crio.image]
	I0103 19:21:30.835683   30211 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0103 19:21:30.835694   30211 command_runner.go:130] > # default_transport = "docker://"
	I0103 19:21:30.835708   30211 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0103 19:21:30.835723   30211 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0103 19:21:30.835734   30211 command_runner.go:130] > # global_auth_file = ""
	I0103 19:21:30.835749   30211 command_runner.go:130] > # The image used to instantiate infra containers.
	I0103 19:21:30.835762   30211 command_runner.go:130] > # This option supports live configuration reload.
	I0103 19:21:30.835773   30211 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0103 19:21:30.835786   30211 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0103 19:21:30.835803   30211 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0103 19:21:30.835814   30211 command_runner.go:130] > # This option supports live configuration reload.
	I0103 19:21:30.835821   30211 command_runner.go:130] > # pause_image_auth_file = ""
	I0103 19:21:30.835835   30211 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0103 19:21:30.835845   30211 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0103 19:21:30.835855   30211 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0103 19:21:30.835867   30211 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0103 19:21:30.835876   30211 command_runner.go:130] > # pause_command = "/pause"
	I0103 19:21:30.835886   30211 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0103 19:21:30.835897   30211 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0103 19:21:30.835913   30211 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0103 19:21:30.835926   30211 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0103 19:21:30.835940   30211 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0103 19:21:30.835951   30211 command_runner.go:130] > # signature_policy = ""
	I0103 19:21:30.835962   30211 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0103 19:21:30.835976   30211 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0103 19:21:30.835987   30211 command_runner.go:130] > # changing them here.
	I0103 19:21:30.835996   30211 command_runner.go:130] > # insecure_registries = [
	I0103 19:21:30.836007   30211 command_runner.go:130] > # ]
	I0103 19:21:30.836019   30211 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0103 19:21:30.836031   30211 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0103 19:21:30.836042   30211 command_runner.go:130] > # image_volumes = "mkdir"
	I0103 19:21:30.836055   30211 command_runner.go:130] > # Temporary directory to use for storing big files
	I0103 19:21:30.836066   30211 command_runner.go:130] > # big_files_temporary_dir = ""
	I0103 19:21:30.836081   30211 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0103 19:21:30.836091   30211 command_runner.go:130] > # CNI plugins.
	I0103 19:21:30.836100   30211 command_runner.go:130] > [crio.network]
	I0103 19:21:30.836111   30211 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0103 19:21:30.836130   30211 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0103 19:21:30.836142   30211 command_runner.go:130] > # cni_default_network = ""
	I0103 19:21:30.836156   30211 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0103 19:21:30.836167   30211 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0103 19:21:30.836181   30211 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0103 19:21:30.836190   30211 command_runner.go:130] > # plugin_dirs = [
	I0103 19:21:30.836197   30211 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0103 19:21:30.836206   30211 command_runner.go:130] > # ]
	I0103 19:21:30.836220   30211 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0103 19:21:30.836231   30211 command_runner.go:130] > [crio.metrics]
	I0103 19:21:30.836242   30211 command_runner.go:130] > # Globally enable or disable metrics support.
	I0103 19:21:30.836252   30211 command_runner.go:130] > enable_metrics = true
	I0103 19:21:30.836264   30211 command_runner.go:130] > # Specify enabled metrics collectors.
	I0103 19:21:30.836273   30211 command_runner.go:130] > # Per default all metrics are enabled.
	I0103 19:21:30.836287   30211 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0103 19:21:30.836299   30211 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0103 19:21:30.836313   30211 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0103 19:21:30.836328   30211 command_runner.go:130] > # metrics_collectors = [
	I0103 19:21:30.836338   30211 command_runner.go:130] > # 	"operations",
	I0103 19:21:30.836350   30211 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0103 19:21:30.836359   30211 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0103 19:21:30.836369   30211 command_runner.go:130] > # 	"operations_errors",
	I0103 19:21:30.836377   30211 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0103 19:21:30.836388   30211 command_runner.go:130] > # 	"image_pulls_by_name",
	I0103 19:21:30.836399   30211 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0103 19:21:30.836408   30211 command_runner.go:130] > # 	"image_pulls_failures",
	I0103 19:21:30.836422   30211 command_runner.go:130] > # 	"image_pulls_successes",
	I0103 19:21:30.836433   30211 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0103 19:21:30.836441   30211 command_runner.go:130] > # 	"image_layer_reuse",
	I0103 19:21:30.836451   30211 command_runner.go:130] > # 	"containers_oom_total",
	I0103 19:21:30.836459   30211 command_runner.go:130] > # 	"containers_oom",
	I0103 19:21:30.836469   30211 command_runner.go:130] > # 	"processes_defunct",
	I0103 19:21:30.836479   30211 command_runner.go:130] > # 	"operations_total",
	I0103 19:21:30.836489   30211 command_runner.go:130] > # 	"operations_latency_seconds",
	I0103 19:21:30.836501   30211 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0103 19:21:30.836516   30211 command_runner.go:130] > # 	"operations_errors_total",
	I0103 19:21:30.836525   30211 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0103 19:21:30.836535   30211 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0103 19:21:30.836544   30211 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0103 19:21:30.836555   30211 command_runner.go:130] > # 	"image_pulls_success_total",
	I0103 19:21:30.836564   30211 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0103 19:21:30.836576   30211 command_runner.go:130] > # 	"containers_oom_count_total",
	I0103 19:21:30.836585   30211 command_runner.go:130] > # ]
	I0103 19:21:30.836596   30211 command_runner.go:130] > # The port on which the metrics server will listen.
	I0103 19:21:30.836613   30211 command_runner.go:130] > # metrics_port = 9090
	I0103 19:21:30.836625   30211 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0103 19:21:30.836633   30211 command_runner.go:130] > # metrics_socket = ""
	I0103 19:21:30.836645   30211 command_runner.go:130] > # The certificate for the secure metrics server.
	I0103 19:21:30.836663   30211 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0103 19:21:30.836676   30211 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0103 19:21:30.836688   30211 command_runner.go:130] > # certificate on any modification event.
	I0103 19:21:30.836698   30211 command_runner.go:130] > # metrics_cert = ""
	I0103 19:21:30.836710   30211 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0103 19:21:30.836720   30211 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0103 19:21:30.836730   30211 command_runner.go:130] > # metrics_key = ""
	I0103 19:21:30.836743   30211 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0103 19:21:30.836753   30211 command_runner.go:130] > [crio.tracing]
	I0103 19:21:30.836766   30211 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0103 19:21:30.836777   30211 command_runner.go:130] > # enable_tracing = false
	I0103 19:21:30.836787   30211 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0103 19:21:30.836799   30211 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0103 19:21:30.836809   30211 command_runner.go:130] > # Number of samples to collect per million spans.
	I0103 19:21:30.836823   30211 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0103 19:21:30.836837   30211 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0103 19:21:30.836847   30211 command_runner.go:130] > [crio.stats]
	I0103 19:21:30.836860   30211 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0103 19:21:30.836872   30211 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0103 19:21:30.836882   30211 command_runner.go:130] > # stats_collection_period = 0
	I0103 19:21:30.836945   30211 command_runner.go:130] ! time="2024-01-03 19:21:30.808554150Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0103 19:21:30.836967   30211 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
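The [crio.metrics] section above is shown with its defaults commented out. If metrics were enabled on the node (the enable_metrics option in the CRI-O config), the collectors listed there would be served in Prometheus text format on metrics_port (9090 by default). A minimal Go sketch of scraping that endpoint, purely illustrative and assuming metrics were switched on:

	package main

	import (
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		// Scrape CRI-O's metrics endpoint on the default metrics_port shown above.
		// Assumes metrics were enabled in the CRI-O configuration (they are not in this run).
		resp, err := http.Get("http://127.0.0.1:9090/metrics")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()

		body, err := io.ReadAll(resp.Body)
		if err != nil {
			panic(err)
		}
		// Prometheus text exposition; collector names may carry the "crio_" or
		// "container_runtime_" prefixes described in the config comments above.
		fmt.Println(string(body))
	}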
	I0103 19:21:30.837051   30211 cni.go:84] Creating CNI manager for ""
	I0103 19:21:30.837063   30211 cni.go:136] 1 nodes found, recommending kindnet
	I0103 19:21:30.837086   30211 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0103 19:21:30.837119   30211 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.191 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-484895 NodeName:multinode-484895 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.191"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.191 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0103 19:21:30.837278   30211 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.191
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-484895"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.191
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.191"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
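The "0%!"(MISSING) values in the evictionHard block above are not part of the generated config; they are Go's fmt package annotating a literal % that reached a printf-style logging call with no matching argument (the intended values are "0%", consistent with the "disable disk resource management" comment). A minimal sketch reproducing the artifact:

	package main

	import "fmt"

	func main() {
		// The kubeadm template contains literal percent signs, e.g. nodefs.available: "0%".
		// When that text goes through a printf-style call with no argument for the stray
		// verb, fmt flags it as missing:
		fmt.Printf("nodefs.available: \"0%\"\n")
		// Output: nodefs.available: "0%!"(MISSING)
	}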
	
	I0103 19:21:30.837363   30211 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-484895 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.191
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-484895 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0103 19:21:30.837433   30211 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0103 19:21:30.846003   30211 command_runner.go:130] > kubeadm
	I0103 19:21:30.846032   30211 command_runner.go:130] > kubectl
	I0103 19:21:30.846054   30211 command_runner.go:130] > kubelet
	I0103 19:21:30.846092   30211 binaries.go:44] Found k8s binaries, skipping transfer
	I0103 19:21:30.846157   30211 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0103 19:21:30.853999   30211 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (376 bytes)
	I0103 19:21:30.868868   30211 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0103 19:21:30.883441   30211 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I0103 19:21:30.900029   30211 ssh_runner.go:195] Run: grep 192.168.39.191	control-plane.minikube.internal$ /etc/hosts
	I0103 19:21:30.903588   30211 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.191	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0103 19:21:30.915965   30211 certs.go:56] Setting up /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/multinode-484895 for IP: 192.168.39.191
	I0103 19:21:30.915994   30211 certs.go:190] acquiring lock for shared ca certs: {Name:mkcbd6a6a2f3ee7625ecf4a1f72bb7f9689bd33d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 19:21:30.916159   30211 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17885-9609/.minikube/ca.key
	I0103 19:21:30.916203   30211 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.key
	I0103 19:21:30.916280   30211 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/multinode-484895/client.key
	I0103 19:21:30.916296   30211 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/multinode-484895/client.crt with IP's: []
	I0103 19:21:31.097989   30211 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/multinode-484895/client.crt ...
	I0103 19:21:31.098021   30211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/multinode-484895/client.crt: {Name:mk311b90d7663407625bfafda784efbe35d83f17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 19:21:31.098174   30211 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/multinode-484895/client.key ...
	I0103 19:21:31.098185   30211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/multinode-484895/client.key: {Name:mk618e9009c7d2a258a368d3c525545dd3ca442c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 19:21:31.098268   30211 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/multinode-484895/apiserver.key.6f081b7d
	I0103 19:21:31.098282   30211 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/multinode-484895/apiserver.crt.6f081b7d with IP's: [192.168.39.191 10.96.0.1 127.0.0.1 10.0.0.1]
	I0103 19:21:31.176133   30211 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/multinode-484895/apiserver.crt.6f081b7d ...
	I0103 19:21:31.176159   30211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/multinode-484895/apiserver.crt.6f081b7d: {Name:mk755c4dd10fbd819b86f7c6cb63f2dc7425d4a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 19:21:31.176301   30211 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/multinode-484895/apiserver.key.6f081b7d ...
	I0103 19:21:31.176313   30211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/multinode-484895/apiserver.key.6f081b7d: {Name:mka05e2c6d307bd01b88729b68cbb74d4c2ad674 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 19:21:31.176384   30211 certs.go:337] copying /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/multinode-484895/apiserver.crt.6f081b7d -> /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/multinode-484895/apiserver.crt
	I0103 19:21:31.176449   30211 certs.go:341] copying /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/multinode-484895/apiserver.key.6f081b7d -> /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/multinode-484895/apiserver.key
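The SAN list used for the apiserver certificate above (192.168.39.191, 10.96.0.1, 127.0.0.1, 10.0.0.1) includes 10.96.0.1 because that is the first address of the ServiceCIDR 10.96.0.0/12 from the kubeadm options, i.e. the ClusterIP the in-cluster kubernetes Service resolves to. A minimal sketch of that derivation, assuming the /12 range shown earlier:

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// The apiserver's in-cluster Service IP is the first usable address of the service CIDR.
		_, cidr, err := net.ParseCIDR("10.96.0.0/12")
		if err != nil {
			panic(err)
		}
		ip := cidr.IP.To4()
		ip[3]++ // 10.96.0.0 -> 10.96.0.1
		fmt.Println(ip) // prints 10.96.0.1, matching the certificate's SAN list
	}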
	I0103 19:21:31.176496   30211 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/multinode-484895/proxy-client.key
	I0103 19:21:31.176508   30211 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/multinode-484895/proxy-client.crt with IP's: []
	I0103 19:21:31.252606   30211 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/multinode-484895/proxy-client.crt ...
	I0103 19:21:31.252634   30211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/multinode-484895/proxy-client.crt: {Name:mk4f8e37a65f56737a91f00a291d06eb63e09784 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 19:21:31.252777   30211 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/multinode-484895/proxy-client.key ...
	I0103 19:21:31.252789   30211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/multinode-484895/proxy-client.key: {Name:mk81970e12ab6934ccc7df6d33559a3b95323937 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 19:21:31.252850   30211 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/multinode-484895/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0103 19:21:31.252867   30211 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/multinode-484895/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0103 19:21:31.252876   30211 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/multinode-484895/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0103 19:21:31.252886   30211 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/multinode-484895/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0103 19:21:31.252896   30211 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0103 19:21:31.252906   30211 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0103 19:21:31.252923   30211 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0103 19:21:31.252936   30211 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0103 19:21:31.252979   30211 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795.pem (1338 bytes)
	W0103 19:21:31.253016   30211 certs.go:433] ignoring /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795_empty.pem, impossibly tiny 0 bytes
	I0103 19:21:31.253029   30211 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem (1675 bytes)
	I0103 19:21:31.253054   30211 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem (1078 bytes)
	I0103 19:21:31.253076   30211 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem (1123 bytes)
	I0103 19:21:31.253100   30211 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem (1679 bytes)
	I0103 19:21:31.253141   30211 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem (1708 bytes)
	I0103 19:21:31.253164   30211 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0103 19:21:31.253176   30211 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795.pem -> /usr/share/ca-certificates/16795.pem
	I0103 19:21:31.253190   30211 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem -> /usr/share/ca-certificates/167952.pem
	I0103 19:21:31.253781   30211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/multinode-484895/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0103 19:21:31.278839   30211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/multinode-484895/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0103 19:21:31.300211   30211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/multinode-484895/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0103 19:21:31.321953   30211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/multinode-484895/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0103 19:21:31.343462   30211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0103 19:21:31.371512   30211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0103 19:21:31.394785   30211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0103 19:21:31.417978   30211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0103 19:21:31.439554   30211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0103 19:21:31.461304   30211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795.pem --> /usr/share/ca-certificates/16795.pem (1338 bytes)
	I0103 19:21:31.483937   30211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem --> /usr/share/ca-certificates/167952.pem (1708 bytes)
	I0103 19:21:31.506245   30211 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0103 19:21:31.521341   30211 ssh_runner.go:195] Run: openssl version
	I0103 19:21:31.526660   30211 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0103 19:21:31.526734   30211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167952.pem && ln -fs /usr/share/ca-certificates/167952.pem /etc/ssl/certs/167952.pem"
	I0103 19:21:31.536047   30211 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167952.pem
	I0103 19:21:31.540382   30211 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan  3 19:07 /usr/share/ca-certificates/167952.pem
	I0103 19:21:31.540612   30211 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  3 19:07 /usr/share/ca-certificates/167952.pem
	I0103 19:21:31.540717   30211 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167952.pem
	I0103 19:21:31.546103   30211 command_runner.go:130] > 3ec20f2e
	I0103 19:21:31.546195   30211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167952.pem /etc/ssl/certs/3ec20f2e.0"
	I0103 19:21:31.556007   30211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0103 19:21:31.565685   30211 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0103 19:21:31.570059   30211 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan  3 18:58 /usr/share/ca-certificates/minikubeCA.pem
	I0103 19:21:31.570238   30211 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  3 18:58 /usr/share/ca-certificates/minikubeCA.pem
	I0103 19:21:31.570339   30211 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0103 19:21:31.575757   30211 command_runner.go:130] > b5213941
	I0103 19:21:31.575837   30211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0103 19:21:31.585457   30211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16795.pem && ln -fs /usr/share/ca-certificates/16795.pem /etc/ssl/certs/16795.pem"
	I0103 19:21:31.594730   30211 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16795.pem
	I0103 19:21:31.599304   30211 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan  3 19:07 /usr/share/ca-certificates/16795.pem
	I0103 19:21:31.599483   30211 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  3 19:07 /usr/share/ca-certificates/16795.pem
	I0103 19:21:31.599538   30211 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16795.pem
	I0103 19:21:31.604813   30211 command_runner.go:130] > 51391683
	I0103 19:21:31.605078   30211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16795.pem /etc/ssl/certs/51391683.0"
	I0103 19:21:31.615078   30211 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0103 19:21:31.619116   30211 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0103 19:21:31.619157   30211 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0103 19:21:31.619207   30211 kubeadm.go:404] StartCluster: {Name:multinode-484895 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-484895 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.191 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 19:21:31.619273   30211 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0103 19:21:31.619312   30211 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0103 19:21:31.658915   30211 cri.go:89] found id: ""
	I0103 19:21:31.658972   30211 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0103 19:21:31.667971   30211 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0103 19:21:31.667992   30211 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0103 19:21:31.668002   30211 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0103 19:21:31.668064   30211 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0103 19:21:31.676954   30211 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0103 19:21:31.685977   30211 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0103 19:21:31.686025   30211 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0103 19:21:31.686037   30211 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0103 19:21:31.686048   30211 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0103 19:21:31.686096   30211 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0103 19:21:31.686132   30211 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0103 19:21:31.792948   30211 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0103 19:21:31.792976   30211 command_runner.go:130] > [init] Using Kubernetes version: v1.28.4
	I0103 19:21:31.793046   30211 kubeadm.go:322] [preflight] Running pre-flight checks
	I0103 19:21:31.793063   30211 command_runner.go:130] > [preflight] Running pre-flight checks
	I0103 19:21:32.027454   30211 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0103 19:21:32.027487   30211 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0103 19:21:32.027616   30211 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0103 19:21:32.027628   30211 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0103 19:21:32.027758   30211 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0103 19:21:32.027770   30211 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0103 19:21:32.232053   30211 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0103 19:21:32.516412   30211 out.go:204]   - Generating certificates and keys ...
	I0103 19:21:32.232162   30211 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0103 19:21:32.516524   30211 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0103 19:21:32.516544   30211 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0103 19:21:32.516606   30211 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0103 19:21:32.516613   30211 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0103 19:21:32.516686   30211 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0103 19:21:32.516694   30211 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0103 19:21:32.611759   30211 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0103 19:21:32.611788   30211 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0103 19:21:32.838537   30211 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0103 19:21:32.838574   30211 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0103 19:21:32.911259   30211 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0103 19:21:32.911286   30211 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0103 19:21:33.134455   30211 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0103 19:21:33.134484   30211 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0103 19:21:33.134679   30211 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-484895] and IPs [192.168.39.191 127.0.0.1 ::1]
	I0103 19:21:33.134709   30211 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-484895] and IPs [192.168.39.191 127.0.0.1 ::1]
	I0103 19:21:33.489925   30211 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0103 19:21:33.489963   30211 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0103 19:21:33.490239   30211 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-484895] and IPs [192.168.39.191 127.0.0.1 ::1]
	I0103 19:21:33.490259   30211 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-484895] and IPs [192.168.39.191 127.0.0.1 ::1]
	I0103 19:21:33.667039   30211 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0103 19:21:33.667076   30211 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0103 19:21:34.061121   30211 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0103 19:21:34.061150   30211 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0103 19:21:34.117026   30211 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0103 19:21:34.117060   30211 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0103 19:21:34.117131   30211 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0103 19:21:34.117157   30211 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0103 19:21:34.242552   30211 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0103 19:21:34.242581   30211 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0103 19:21:34.449433   30211 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0103 19:21:34.449461   30211 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0103 19:21:35.192613   30211 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0103 19:21:35.192632   30211 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0103 19:21:35.335531   30211 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0103 19:21:35.335564   30211 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0103 19:21:35.336372   30211 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0103 19:21:35.336392   30211 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0103 19:21:35.339357   30211 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0103 19:21:35.339389   30211 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0103 19:21:35.341389   30211 out.go:204]   - Booting up control plane ...
	I0103 19:21:35.341595   30211 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0103 19:21:35.341616   30211 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0103 19:21:35.341718   30211 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0103 19:21:35.341727   30211 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0103 19:21:35.341827   30211 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0103 19:21:35.341844   30211 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0103 19:21:35.357072   30211 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0103 19:21:35.357121   30211 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0103 19:21:35.359494   30211 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0103 19:21:35.359524   30211 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0103 19:21:35.359566   30211 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0103 19:21:35.359582   30211 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0103 19:21:35.480104   30211 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0103 19:21:35.480117   30211 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0103 19:21:42.480986   30211 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.004862 seconds
	I0103 19:21:42.481034   30211 command_runner.go:130] > [apiclient] All control plane components are healthy after 7.004862 seconds
	I0103 19:21:42.481194   30211 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0103 19:21:42.481210   30211 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0103 19:21:42.494596   30211 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0103 19:21:42.494630   30211 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0103 19:21:43.026698   30211 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0103 19:21:43.026724   30211 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0103 19:21:43.026958   30211 kubeadm.go:322] [mark-control-plane] Marking the node multinode-484895 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0103 19:21:43.026980   30211 command_runner.go:130] > [mark-control-plane] Marking the node multinode-484895 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0103 19:21:43.543719   30211 kubeadm.go:322] [bootstrap-token] Using token: ngmvpz.qxo5g2mkp8yjf3av
	I0103 19:21:43.545506   30211 out.go:204]   - Configuring RBAC rules ...
	I0103 19:21:43.543769   30211 command_runner.go:130] > [bootstrap-token] Using token: ngmvpz.qxo5g2mkp8yjf3av
	I0103 19:21:43.545640   30211 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0103 19:21:43.545678   30211 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0103 19:21:43.553089   30211 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0103 19:21:43.553091   30211 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0103 19:21:43.566386   30211 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0103 19:21:43.566414   30211 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0103 19:21:43.571105   30211 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0103 19:21:43.571137   30211 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0103 19:21:43.581994   30211 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0103 19:21:43.582022   30211 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0103 19:21:43.589504   30211 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0103 19:21:43.589527   30211 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0103 19:21:43.608831   30211 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0103 19:21:43.608854   30211 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0103 19:21:43.838724   30211 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0103 19:21:43.838786   30211 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0103 19:21:43.959468   30211 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0103 19:21:43.959492   30211 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0103 19:21:43.959496   30211 kubeadm.go:322] 
	I0103 19:21:43.959553   30211 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0103 19:21:43.959560   30211 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0103 19:21:43.959566   30211 kubeadm.go:322] 
	I0103 19:21:43.959644   30211 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0103 19:21:43.959657   30211 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0103 19:21:43.959660   30211 kubeadm.go:322] 
	I0103 19:21:43.959692   30211 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0103 19:21:43.959716   30211 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0103 19:21:43.959811   30211 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0103 19:21:43.959838   30211 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0103 19:21:43.959918   30211 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0103 19:21:43.959928   30211 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0103 19:21:43.959934   30211 kubeadm.go:322] 
	I0103 19:21:43.960010   30211 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0103 19:21:43.960018   30211 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0103 19:21:43.960024   30211 kubeadm.go:322] 
	I0103 19:21:43.960101   30211 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0103 19:21:43.960112   30211 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0103 19:21:43.960120   30211 kubeadm.go:322] 
	I0103 19:21:43.960189   30211 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0103 19:21:43.960199   30211 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0103 19:21:43.960289   30211 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0103 19:21:43.960308   30211 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0103 19:21:43.960415   30211 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0103 19:21:43.960426   30211 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0103 19:21:43.960431   30211 kubeadm.go:322] 
	I0103 19:21:43.960552   30211 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0103 19:21:43.960564   30211 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0103 19:21:43.960659   30211 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0103 19:21:43.960669   30211 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0103 19:21:43.960675   30211 kubeadm.go:322] 
	I0103 19:21:43.960804   30211 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ngmvpz.qxo5g2mkp8yjf3av \
	I0103 19:21:43.960825   30211 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token ngmvpz.qxo5g2mkp8yjf3av \
	I0103 19:21:43.960969   30211 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:abd7748e33dd825416f0452914584982da7041f4caa98027889459d3fee91b12 \
	I0103 19:21:43.960979   30211 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:abd7748e33dd825416f0452914584982da7041f4caa98027889459d3fee91b12 \
	I0103 19:21:43.961002   30211 kubeadm.go:322] 	--control-plane 
	I0103 19:21:43.961007   30211 command_runner.go:130] > 	--control-plane 
	I0103 19:21:43.961016   30211 kubeadm.go:322] 
	I0103 19:21:43.961140   30211 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0103 19:21:43.961150   30211 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0103 19:21:43.961155   30211 kubeadm.go:322] 
	I0103 19:21:43.961279   30211 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ngmvpz.qxo5g2mkp8yjf3av \
	I0103 19:21:43.961307   30211 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token ngmvpz.qxo5g2mkp8yjf3av \
	I0103 19:21:43.961419   30211 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:abd7748e33dd825416f0452914584982da7041f4caa98027889459d3fee91b12 
	I0103 19:21:43.961430   30211 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:abd7748e33dd825416f0452914584982da7041f4caa98027889459d3fee91b12 
	I0103 19:21:43.962265   30211 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0103 19:21:43.962285   30211 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
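The --discovery-token-ca-cert-hash value printed in the join commands above is the SHA-256 of the cluster CA's DER-encoded Subject Public Key Info, which joining nodes use to pin the control plane's CA. A minimal Go sketch of the derivation (the ca.crt path matches the certs directory provisioned earlier; treat the example as illustrative):

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		// Read the cluster CA certificate written to the certs directory above.
		data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block in ca.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// kubeadm's discovery hash is SHA-256 over the DER-encoded SubjectPublicKeyInfo.
		sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
		fmt.Printf("sha256:%x\n", sum)
	}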
	I0103 19:21:43.962314   30211 cni.go:84] Creating CNI manager for ""
	I0103 19:21:43.962325   30211 cni.go:136] 1 nodes found, recommending kindnet
	I0103 19:21:43.964304   30211 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0103 19:21:43.965856   30211 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0103 19:21:43.985703   30211 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0103 19:21:43.985725   30211 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0103 19:21:43.985732   30211 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0103 19:21:43.985739   30211 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0103 19:21:43.985755   30211 command_runner.go:130] > Access: 2024-01-03 19:21:12.720055225 +0000
	I0103 19:21:43.985760   30211 command_runner.go:130] > Modify: 2023-12-16 11:53:47.000000000 +0000
	I0103 19:21:43.985768   30211 command_runner.go:130] > Change: 2024-01-03 19:21:11.081055225 +0000
	I0103 19:21:43.985772   30211 command_runner.go:130] >  Birth: -
	I0103 19:21:43.986717   30211 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0103 19:21:43.986734   30211 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0103 19:21:44.010828   30211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0103 19:21:44.964651   30211 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0103 19:21:44.971763   30211 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0103 19:21:44.985442   30211 command_runner.go:130] > serviceaccount/kindnet created
	I0103 19:21:45.005270   30211 command_runner.go:130] > daemonset.apps/kindnet created
	I0103 19:21:45.009175   30211 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0103 19:21:45.009323   30211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=1b6a81cbc05f28310ff11df4170e79e2b8bf477a minikube.k8s.io/name=multinode-484895 minikube.k8s.io/updated_at=2024_01_03T19_21_45_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:21:45.009329   30211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:21:45.049127   30211 command_runner.go:130] > -16
	I0103 19:21:45.049208   30211 ops.go:34] apiserver oom_adj: -16
	I0103 19:21:45.173302   30211 command_runner.go:130] > node/multinode-484895 labeled
	I0103 19:21:45.173383   30211 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0103 19:21:45.173507   30211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:21:45.300870   30211 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0103 19:21:45.674504   30211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:21:45.773811   30211 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0103 19:21:46.174425   30211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:21:46.262558   30211 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0103 19:21:46.673730   30211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:21:46.754698   30211 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0103 19:21:47.174380   30211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:21:47.258165   30211 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0103 19:21:47.673803   30211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:21:47.757411   30211 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0103 19:21:48.173966   30211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:21:48.250781   30211 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0103 19:21:48.674408   30211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:21:48.751156   30211 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0103 19:21:49.174491   30211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:21:49.257001   30211 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0103 19:21:49.674395   30211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:21:49.750845   30211 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0103 19:21:50.174370   30211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:21:50.279386   30211 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0103 19:21:50.674376   30211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:21:50.767836   30211 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0103 19:21:51.174462   30211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:21:51.273852   30211 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0103 19:21:51.674491   30211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:21:51.769870   30211 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0103 19:21:52.174600   30211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:21:52.271094   30211 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0103 19:21:52.673676   30211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:21:52.752977   30211 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0103 19:21:53.173557   30211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:21:53.260586   30211 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0103 19:21:53.674174   30211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:21:53.760844   30211 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0103 19:21:54.173886   30211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:21:54.279355   30211 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0103 19:21:54.674549   30211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:21:54.778734   30211 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0103 19:21:55.174443   30211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:21:55.258285   30211 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0103 19:21:55.674184   30211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:21:55.757830   30211 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0103 19:21:56.174494   30211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:21:56.337190   30211 command_runner.go:130] > NAME      SECRETS   AGE
	I0103 19:21:56.337212   30211 command_runner.go:130] > default   0         0s
	I0103 19:21:56.338749   30211 kubeadm.go:1088] duration metric: took 11.329482865s to wait for elevateKubeSystemPrivileges.
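The run of serviceaccounts "default" not found errors above is expected: the check is simply re-run roughly every half second until the controller manager creates the default ServiceAccount, which here took about 11 s. A minimal sketch of that wait pattern (command and timeout are illustrative, not minikube's actual code):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		// Poll until `kubectl get sa default` succeeds or the deadline passes,
		// mirroring the ~500ms retry cadence visible in the log above.
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "get", "sa", "default").CombinedOutput()
			if err == nil {
				fmt.Print(string(out))
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for the default ServiceAccount")
	}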
	I0103 19:21:56.338782   30211 kubeadm.go:406] StartCluster complete in 24.719580507s
	I0103 19:21:56.338802   30211 settings.go:142] acquiring lock: {Name:mkd213c48538fa01cb82b417485055a8adbf5e4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 19:21:56.338890   30211 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17885-9609/kubeconfig
	I0103 19:21:56.339785   30211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-9609/kubeconfig: {Name:mkbd4e6a8b39f5a4a43fb71671a7bbd8b1617cf1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 19:21:56.340030   30211 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0103 19:21:56.340178   30211 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0103 19:21:56.340272   30211 addons.go:69] Setting storage-provisioner=true in profile "multinode-484895"
	I0103 19:21:56.340293   30211 addons.go:69] Setting default-storageclass=true in profile "multinode-484895"
	I0103 19:21:56.340299   30211 addons.go:237] Setting addon storage-provisioner=true in "multinode-484895"
	I0103 19:21:56.340308   30211 config.go:182] Loaded profile config "multinode-484895": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 19:21:56.340323   30211 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-484895"
	I0103 19:21:56.340365   30211 host.go:66] Checking if "multinode-484895" exists ...
	I0103 19:21:56.340443   30211 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17885-9609/kubeconfig
	I0103 19:21:56.340743   30211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 19:21:56.340753   30211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 19:21:56.340797   30211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 19:21:56.340769   30211 kapi.go:59] client config for multinode-484895: &rest.Config{Host:"https://192.168.39.191:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17885-9609/.minikube/profiles/multinode-484895/client.crt", KeyFile:"/home/jenkins/minikube-integration/17885-9609/.minikube/profiles/multinode-484895/client.key", CAFile:"/home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c20060), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
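The kapi.go line above dumps the rest.Config minikube builds for this profile: certificate-based auth (client.crt/client.key/ca.crt) against https://192.168.39.191:8443. A hedged client-go sketch of constructing an equivalent config from a kubeconfig file; the path is a placeholder and this is not minikube's implementation:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// BuildConfigFromFlags reads the kubeconfig and yields a *rest.Config
	// carrying Host, CertFile, KeyFile and CAFile, as seen in the log.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println("API server:", cfg.Host, "client ready:", clientset != nil)
}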
	I0103 19:21:56.340876   30211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 19:21:56.341495   30211 cert_rotation.go:137] Starting client certificate rotation controller
	I0103 19:21:56.341760   30211 round_trippers.go:463] GET https://192.168.39.191:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0103 19:21:56.341776   30211 round_trippers.go:469] Request Headers:
	I0103 19:21:56.341786   30211 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:21:56.341797   30211 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:21:56.355420   30211 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0103 19:21:56.355448   30211 round_trippers.go:577] Response Headers:
	I0103 19:21:56.355460   30211 round_trippers.go:580]     Content-Length: 291
	I0103 19:21:56.355469   30211 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:21:56 GMT
	I0103 19:21:56.355479   30211 round_trippers.go:580]     Audit-Id: 4af87446-abb1-47e4-92d0-e01aec6c8082
	I0103 19:21:56.355487   30211 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:21:56.355501   30211 round_trippers.go:580]     Content-Type: application/json
	I0103 19:21:56.355508   30211 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:21:56.355516   30211 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:21:56.355546   30211 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"e2317390-8a66-46be-8656-5adca86177ea","resourceVersion":"234","creationTimestamp":"2024-01-03T19:21:43Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0103 19:21:56.356072   30211 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"e2317390-8a66-46be-8656-5adca86177ea","resourceVersion":"234","creationTimestamp":"2024-01-03T19:21:43Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0103 19:21:56.356142   30211 round_trippers.go:463] PUT https://192.168.39.191:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0103 19:21:56.356154   30211 round_trippers.go:469] Request Headers:
	I0103 19:21:56.356165   30211 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:21:56.356178   30211 round_trippers.go:473]     Content-Type: application/json
	I0103 19:21:56.356195   30211 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:21:56.356667   30211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37769
	I0103 19:21:56.357121   30211 main.go:141] libmachine: () Calling .GetVersion
	I0103 19:21:56.357580   30211 main.go:141] libmachine: Using API Version  1
	I0103 19:21:56.357599   30211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 19:21:56.358002   30211 main.go:141] libmachine: () Calling .GetMachineName
	I0103 19:21:56.359163   30211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 19:21:56.359212   30211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 19:21:56.359226   30211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42683
	I0103 19:21:56.359587   30211 main.go:141] libmachine: () Calling .GetVersion
	I0103 19:21:56.360036   30211 main.go:141] libmachine: Using API Version  1
	I0103 19:21:56.360060   30211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 19:21:56.360381   30211 main.go:141] libmachine: () Calling .GetMachineName
	I0103 19:21:56.360543   30211 main.go:141] libmachine: (multinode-484895) Calling .GetState
	I0103 19:21:56.363188   30211 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17885-9609/kubeconfig
	I0103 19:21:56.363429   30211 kapi.go:59] client config for multinode-484895: &rest.Config{Host:"https://192.168.39.191:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17885-9609/.minikube/profiles/multinode-484895/client.crt", KeyFile:"/home/jenkins/minikube-integration/17885-9609/.minikube/profiles/multinode-484895/client.key", CAFile:"/home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c20060), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0103 19:21:56.363635   30211 addons.go:237] Setting addon default-storageclass=true in "multinode-484895"
	I0103 19:21:56.363666   30211 host.go:66] Checking if "multinode-484895" exists ...
	I0103 19:21:56.363960   30211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 19:21:56.363987   30211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 19:21:56.373446   30211 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0103 19:21:56.373467   30211 round_trippers.go:577] Response Headers:
	I0103 19:21:56.373474   30211 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:21:56.373496   30211 round_trippers.go:580]     Content-Length: 291
	I0103 19:21:56.373502   30211 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:21:56 GMT
	I0103 19:21:56.373508   30211 round_trippers.go:580]     Audit-Id: f51bf8a0-4784-4886-af23-c00fa6461fa0
	I0103 19:21:56.373516   30211 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:21:56.373524   30211 round_trippers.go:580]     Content-Type: application/json
	I0103 19:21:56.373534   30211 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:21:56.373988   30211 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"e2317390-8a66-46be-8656-5adca86177ea","resourceVersion":"334","creationTimestamp":"2024-01-03T19:21:43Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0103 19:21:56.374929   30211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33523
	I0103 19:21:56.375315   30211 main.go:141] libmachine: () Calling .GetVersion
	I0103 19:21:56.375794   30211 main.go:141] libmachine: Using API Version  1
	I0103 19:21:56.375820   30211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 19:21:56.376203   30211 main.go:141] libmachine: () Calling .GetMachineName
	I0103 19:21:56.376385   30211 main.go:141] libmachine: (multinode-484895) Calling .GetState
	I0103 19:21:56.378048   30211 main.go:141] libmachine: (multinode-484895) Calling .DriverName
	I0103 19:21:56.380160   30211 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 19:21:56.378364   30211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33561
	I0103 19:21:56.381494   30211 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0103 19:21:56.381517   30211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0103 19:21:56.381537   30211 main.go:141] libmachine: (multinode-484895) Calling .GetSSHHostname
	I0103 19:21:56.381707   30211 main.go:141] libmachine: () Calling .GetVersion
	I0103 19:21:56.382182   30211 main.go:141] libmachine: Using API Version  1
	I0103 19:21:56.382200   30211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 19:21:56.382592   30211 main.go:141] libmachine: () Calling .GetMachineName
	I0103 19:21:56.383318   30211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 19:21:56.383370   30211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 19:21:56.384516   30211 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:21:56.384897   30211 main.go:141] libmachine: (multinode-484895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:f0:8c", ip: ""} in network mk-multinode-484895: {Iface:virbr1 ExpiryTime:2024-01-03 20:21:15 +0000 UTC Type:0 Mac:52:54:00:28:f0:8c Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:multinode-484895 Clientid:01:52:54:00:28:f0:8c}
	I0103 19:21:56.384927   30211 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined IP address 192.168.39.191 and MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:21:56.385083   30211 main.go:141] libmachine: (multinode-484895) Calling .GetSSHPort
	I0103 19:21:56.385273   30211 main.go:141] libmachine: (multinode-484895) Calling .GetSSHKeyPath
	I0103 19:21:56.385384   30211 main.go:141] libmachine: (multinode-484895) Calling .GetSSHUsername
	I0103 19:21:56.385495   30211 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/multinode-484895/id_rsa Username:docker}
	I0103 19:21:56.397531   30211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42337
	I0103 19:21:56.397914   30211 main.go:141] libmachine: () Calling .GetVersion
	I0103 19:21:56.398383   30211 main.go:141] libmachine: Using API Version  1
	I0103 19:21:56.398406   30211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 19:21:56.398746   30211 main.go:141] libmachine: () Calling .GetMachineName
	I0103 19:21:56.398948   30211 main.go:141] libmachine: (multinode-484895) Calling .GetState
	I0103 19:21:56.400538   30211 main.go:141] libmachine: (multinode-484895) Calling .DriverName
	I0103 19:21:56.400779   30211 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0103 19:21:56.400792   30211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0103 19:21:56.400806   30211 main.go:141] libmachine: (multinode-484895) Calling .GetSSHHostname
	I0103 19:21:56.403252   30211 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:21:56.403635   30211 main.go:141] libmachine: (multinode-484895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:f0:8c", ip: ""} in network mk-multinode-484895: {Iface:virbr1 ExpiryTime:2024-01-03 20:21:15 +0000 UTC Type:0 Mac:52:54:00:28:f0:8c Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:multinode-484895 Clientid:01:52:54:00:28:f0:8c}
	I0103 19:21:56.403673   30211 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined IP address 192.168.39.191 and MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:21:56.403818   30211 main.go:141] libmachine: (multinode-484895) Calling .GetSSHPort
	I0103 19:21:56.403990   30211 main.go:141] libmachine: (multinode-484895) Calling .GetSSHKeyPath
	I0103 19:21:56.404130   30211 main.go:141] libmachine: (multinode-484895) Calling .GetSSHUsername
	I0103 19:21:56.404272   30211 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/multinode-484895/id_rsa Username:docker}
	I0103 19:21:56.499472   30211 command_runner.go:130] > apiVersion: v1
	I0103 19:21:56.499499   30211 command_runner.go:130] > data:
	I0103 19:21:56.499506   30211 command_runner.go:130] >   Corefile: |
	I0103 19:21:56.499513   30211 command_runner.go:130] >     .:53 {
	I0103 19:21:56.499519   30211 command_runner.go:130] >         errors
	I0103 19:21:56.499528   30211 command_runner.go:130] >         health {
	I0103 19:21:56.499536   30211 command_runner.go:130] >            lameduck 5s
	I0103 19:21:56.499543   30211 command_runner.go:130] >         }
	I0103 19:21:56.499549   30211 command_runner.go:130] >         ready
	I0103 19:21:56.499559   30211 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0103 19:21:56.499567   30211 command_runner.go:130] >            pods insecure
	I0103 19:21:56.499579   30211 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0103 19:21:56.499588   30211 command_runner.go:130] >            ttl 30
	I0103 19:21:56.499607   30211 command_runner.go:130] >         }
	I0103 19:21:56.499617   30211 command_runner.go:130] >         prometheus :9153
	I0103 19:21:56.499629   30211 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0103 19:21:56.499638   30211 command_runner.go:130] >            max_concurrent 1000
	I0103 19:21:56.499647   30211 command_runner.go:130] >         }
	I0103 19:21:56.499656   30211 command_runner.go:130] >         cache 30
	I0103 19:21:56.499662   30211 command_runner.go:130] >         loop
	I0103 19:21:56.499671   30211 command_runner.go:130] >         reload
	I0103 19:21:56.499678   30211 command_runner.go:130] >         loadbalance
	I0103 19:21:56.499686   30211 command_runner.go:130] >     }
	I0103 19:21:56.499694   30211 command_runner.go:130] > kind: ConfigMap
	I0103 19:21:56.499702   30211 command_runner.go:130] > metadata:
	I0103 19:21:56.499715   30211 command_runner.go:130] >   creationTimestamp: "2024-01-03T19:21:43Z"
	I0103 19:21:56.499725   30211 command_runner.go:130] >   name: coredns
	I0103 19:21:56.499735   30211 command_runner.go:130] >   namespace: kube-system
	I0103 19:21:56.499742   30211 command_runner.go:130] >   resourceVersion: "230"
	I0103 19:21:56.499753   30211 command_runner.go:130] >   uid: e65758c8-7a81-43f3-915e-38ae133a6536
	I0103 19:21:56.501190   30211 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0103 19:21:56.572385   30211 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0103 19:21:56.598374   30211 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0103 19:21:56.842661   30211 round_trippers.go:463] GET https://192.168.39.191:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0103 19:21:56.842680   30211 round_trippers.go:469] Request Headers:
	I0103 19:21:56.842688   30211 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:21:56.842694   30211 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:21:56.896715   30211 round_trippers.go:574] Response Status: 200 OK in 54 milliseconds
	I0103 19:21:56.896747   30211 round_trippers.go:577] Response Headers:
	I0103 19:21:56.896757   30211 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:21:56.896765   30211 round_trippers.go:580]     Content-Type: application/json
	I0103 19:21:56.896773   30211 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:21:56.896781   30211 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:21:56.896790   30211 round_trippers.go:580]     Content-Length: 291
	I0103 19:21:56.896799   30211 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:21:56 GMT
	I0103 19:21:56.896808   30211 round_trippers.go:580]     Audit-Id: 12ef8553-40ac-420f-9679-595728629829
	I0103 19:21:56.897039   30211 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"e2317390-8a66-46be-8656-5adca86177ea","resourceVersion":"341","creationTimestamp":"2024-01-03T19:21:43Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0103 19:21:56.897151   30211 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-484895" context rescaled to 1 replicas
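The GET/PUT pair against .../deployments/coredns/scale above is how the coredns Deployment is rescaled from 2 to 1 replica for a fresh single-node control plane. A hedged client-go sketch of the same operation via the Scale subresource, assuming a placeholder kubeconfig path; minikube's own code issues the raw requests shown in the log:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx := context.Background()
	// Read the Scale subresource (GET .../deployments/coredns/scale).
	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Write it back with one replica (PUT .../deployments/coredns/scale).
	scale.Spec.Replicas = 1
	if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("coredns rescaled to 1 replica")
}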
	I0103 19:21:56.897197   30211 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.191 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0103 19:21:56.899470   30211 out.go:177] * Verifying Kubernetes components...
	I0103 19:21:56.900864   30211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 19:21:57.285029   30211 command_runner.go:130] > configmap/coredns replaced
	I0103 19:21:57.285073   30211 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
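The "host record injected" line is the outcome of the sed pipeline run at 19:21:56.501: a hosts{} stanza that resolves host.minikube.internal to the gateway IP is spliced into the Corefile ahead of the forward plugin, and the ConfigMap is replaced. A rough Go equivalent of that string edit; the IP is taken from the log, and writing the result back to the ConfigMap is omitted:

package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a hosts{} block for host.minikube.internal
// immediately before the forward stanza, mirroring what the sed command does.
func injectHostRecord(corefile, hostIP string) string {
	hosts := fmt.Sprintf(
		"        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n",
		hostIP)
	return strings.Replace(corefile,
		"        forward . /etc/resolv.conf",
		hosts+"        forward . /etc/resolv.conf", 1)
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n        cache 30\n}\n"
	fmt.Println(injectHostRecord(corefile, "192.168.39.1"))
}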
	I0103 19:21:57.524097   30211 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0103 19:21:57.533266   30211 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0103 19:21:57.570946   30211 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0103 19:21:57.601231   30211 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0103 19:21:57.613159   30211 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0103 19:21:57.621906   30211 command_runner.go:130] > pod/storage-provisioner created
	I0103 19:21:57.624468   30211 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0103 19:21:57.624471   30211 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.052041101s)
	I0103 19:21:57.624511   30211 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.026107096s)
	I0103 19:21:57.624589   30211 main.go:141] libmachine: Making call to close driver server
	I0103 19:21:57.624604   30211 main.go:141] libmachine: Making call to close driver server
	I0103 19:21:57.624612   30211 main.go:141] libmachine: (multinode-484895) Calling .Close
	I0103 19:21:57.624618   30211 main.go:141] libmachine: (multinode-484895) Calling .Close
	I0103 19:21:57.624902   30211 main.go:141] libmachine: Successfully made call to close driver server
	I0103 19:21:57.624921   30211 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 19:21:57.624934   30211 main.go:141] libmachine: Making call to close driver server
	I0103 19:21:57.624942   30211 main.go:141] libmachine: (multinode-484895) Calling .Close
	I0103 19:21:57.625040   30211 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17885-9609/kubeconfig
	I0103 19:21:57.625057   30211 main.go:141] libmachine: (multinode-484895) DBG | Closing plugin on server side
	I0103 19:21:57.625084   30211 main.go:141] libmachine: Successfully made call to close driver server
	I0103 19:21:57.625103   30211 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 19:21:57.625115   30211 main.go:141] libmachine: Making call to close driver server
	I0103 19:21:57.625140   30211 main.go:141] libmachine: (multinode-484895) Calling .Close
	I0103 19:21:57.625368   30211 main.go:141] libmachine: Successfully made call to close driver server
	I0103 19:21:57.625396   30211 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 19:21:57.625369   30211 kapi.go:59] client config for multinode-484895: &rest.Config{Host:"https://192.168.39.191:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17885-9609/.minikube/profiles/multinode-484895/client.crt", KeyFile:"/home/jenkins/minikube-integration/17885-9609/.minikube/profiles/multinode-484895/client.key", CAFile:"/home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c20060), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0103 19:21:57.625502   30211 round_trippers.go:463] GET https://192.168.39.191:8443/apis/storage.k8s.io/v1/storageclasses
	I0103 19:21:57.625520   30211 round_trippers.go:469] Request Headers:
	I0103 19:21:57.625530   30211 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:21:57.625539   30211 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:21:57.625697   30211 main.go:141] libmachine: (multinode-484895) DBG | Closing plugin on server side
	I0103 19:21:57.625697   30211 node_ready.go:35] waiting up to 6m0s for node "multinode-484895" to be "Ready" ...
	I0103 19:21:57.625759   30211 main.go:141] libmachine: Successfully made call to close driver server
	I0103 19:21:57.625786   30211 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 19:21:57.625788   30211 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895
	I0103 19:21:57.625880   30211 round_trippers.go:469] Request Headers:
	I0103 19:21:57.625893   30211 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:21:57.625901   30211 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:21:57.629135   30211 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0103 19:21:57.629152   30211 round_trippers.go:577] Response Headers:
	I0103 19:21:57.629159   30211 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:21:57.629164   30211 round_trippers.go:580]     Content-Type: application/json
	I0103 19:21:57.629169   30211 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:21:57.629174   30211 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:21:57.629180   30211 round_trippers.go:580]     Content-Length: 1273
	I0103 19:21:57.629185   30211 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:21:57 GMT
	I0103 19:21:57.629193   30211 round_trippers.go:580]     Audit-Id: 70fe7bde-b557-4860-9c26-c19a6fee31f1
	I0103 19:21:57.629223   30211 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"367"},"items":[{"metadata":{"name":"standard","uid":"75de563a-9baa-47f0-ba92-c33ad52d1a60","resourceVersion":"360","creationTimestamp":"2024-01-03T19:21:57Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-01-03T19:21:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0103 19:21:57.629599   30211 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"75de563a-9baa-47f0-ba92-c33ad52d1a60","resourceVersion":"360","creationTimestamp":"2024-01-03T19:21:57Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-01-03T19:21:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0103 19:21:57.629646   30211 round_trippers.go:463] PUT https://192.168.39.191:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0103 19:21:57.629651   30211 round_trippers.go:469] Request Headers:
	I0103 19:21:57.629659   30211 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:21:57.629668   30211 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:21:57.629681   30211 round_trippers.go:473]     Content-Type: application/json
	I0103 19:21:57.636255   30211 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0103 19:21:57.636279   30211 round_trippers.go:577] Response Headers:
	I0103 19:21:57.636286   30211 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:21:57.636292   30211 round_trippers.go:580]     Content-Length: 1220
	I0103 19:21:57.636297   30211 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:21:57 GMT
	I0103 19:21:57.636303   30211 round_trippers.go:580]     Audit-Id: de89cc89-24c5-403d-b383-66ec15c2f467
	I0103 19:21:57.636308   30211 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:21:57.636313   30211 round_trippers.go:580]     Content-Type: application/json
	I0103 19:21:57.636320   30211 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:21:57.636362   30211 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"75de563a-9baa-47f0-ba92-c33ad52d1a60","resourceVersion":"360","creationTimestamp":"2024-01-03T19:21:57Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-01-03T19:21:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
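The GET/PUT against /apis/storage.k8s.io/v1/storageclasses above shows the default-storageclass addon confirming that the "standard" StorageClass carries the storageclass.kubernetes.io/is-default-class=true annotation. A hedged sketch of setting that annotation with client-go; illustrative only, since the addon itself applies a manifest via kubectl:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx := context.Background()
	sc, err := cs.StorageV1().StorageClasses().Get(ctx, "standard", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if sc.Annotations == nil {
		sc.Annotations = map[string]string{}
	}
	// Marking a StorageClass default is done purely through this annotation.
	sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
	if _, err := cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("StorageClass standard marked as default")
}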
	I0103 19:21:57.636506   30211 main.go:141] libmachine: Making call to close driver server
	I0103 19:21:57.636526   30211 main.go:141] libmachine: (multinode-484895) Calling .Close
	I0103 19:21:57.636835   30211 main.go:141] libmachine: (multinode-484895) DBG | Closing plugin on server side
	I0103 19:21:57.636865   30211 main.go:141] libmachine: Successfully made call to close driver server
	I0103 19:21:57.636882   30211 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 19:21:57.638682   30211 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0103 19:21:57.637615   30211 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0103 19:21:57.640200   30211 round_trippers.go:577] Response Headers:
	I0103 19:21:57.640214   30211 round_trippers.go:580]     Audit-Id: 21593ca6-3eba-45b1-99be-31ecd2a070d8
	I0103 19:21:57.640213   30211 addons.go:508] enable addons completed in 1.300039255s: enabled=[storage-provisioner default-storageclass]
	I0103 19:21:57.640224   30211 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:21:57.640261   30211 round_trippers.go:580]     Content-Type: application/json
	I0103 19:21:57.640272   30211 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:21:57.640281   30211 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:21:57.640292   30211 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:21:57 GMT
	I0103 19:21:57.640461   30211 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","resourceVersion":"308","creationTimestamp":"2024-01-03T19:21:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_21_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:21:40Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0103 19:21:58.125973   30211 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895
	I0103 19:21:58.125999   30211 round_trippers.go:469] Request Headers:
	I0103 19:21:58.126007   30211 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:21:58.126012   30211 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:21:58.128818   30211 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:21:58.128843   30211 round_trippers.go:577] Response Headers:
	I0103 19:21:58.128853   30211 round_trippers.go:580]     Audit-Id: 85e0d3ce-fa66-4de5-89c4-9184c86a10ea
	I0103 19:21:58.128861   30211 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:21:58.128870   30211 round_trippers.go:580]     Content-Type: application/json
	I0103 19:21:58.128877   30211 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:21:58.128885   30211 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:21:58.128897   30211 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:21:58 GMT
	I0103 19:21:58.128987   30211 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","resourceVersion":"308","creationTimestamp":"2024-01-03T19:21:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_21_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:21:40Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0103 19:21:58.626596   30211 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895
	I0103 19:21:58.626625   30211 round_trippers.go:469] Request Headers:
	I0103 19:21:58.626638   30211 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:21:58.626646   30211 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:21:58.630135   30211 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0103 19:21:58.630165   30211 round_trippers.go:577] Response Headers:
	I0103 19:21:58.630206   30211 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:21:58.630219   30211 round_trippers.go:580]     Content-Type: application/json
	I0103 19:21:58.630227   30211 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:21:58.630237   30211 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:21:58.630250   30211 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:21:58 GMT
	I0103 19:21:58.630258   30211 round_trippers.go:580]     Audit-Id: d150a1ea-a525-4ac6-ac86-6263036e5a62
	I0103 19:21:58.631097   30211 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","resourceVersion":"308","creationTimestamp":"2024-01-03T19:21:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_21_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:21:40Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0103 19:21:59.126823   30211 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895
	I0103 19:21:59.126848   30211 round_trippers.go:469] Request Headers:
	I0103 19:21:59.126856   30211 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:21:59.126863   30211 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:21:59.129480   30211 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:21:59.129502   30211 round_trippers.go:577] Response Headers:
	I0103 19:21:59.129509   30211 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:21:59.129535   30211 round_trippers.go:580]     Content-Type: application/json
	I0103 19:21:59.129540   30211 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:21:59.129545   30211 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:21:59.129553   30211 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:21:59 GMT
	I0103 19:21:59.129562   30211 round_trippers.go:580]     Audit-Id: 65d5e346-e734-48a2-b24b-d747b01e2880
	I0103 19:21:59.129686   30211 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","resourceVersion":"308","creationTimestamp":"2024-01-03T19:21:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_21_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:21:40Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0103 19:21:59.626239   30211 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895
	I0103 19:21:59.626271   30211 round_trippers.go:469] Request Headers:
	I0103 19:21:59.626283   30211 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:21:59.626293   30211 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:21:59.629811   30211 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0103 19:21:59.629831   30211 round_trippers.go:577] Response Headers:
	I0103 19:21:59.629838   30211 round_trippers.go:580]     Audit-Id: 2815a846-e32c-4d17-9239-0a2e51edb88e
	I0103 19:21:59.629843   30211 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:21:59.629849   30211 round_trippers.go:580]     Content-Type: application/json
	I0103 19:21:59.629857   30211 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:21:59.629865   30211 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:21:59.629874   30211 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:21:59 GMT
	I0103 19:21:59.630040   30211 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","resourceVersion":"308","creationTimestamp":"2024-01-03T19:21:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_21_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:21:40Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0103 19:21:59.630415   30211 node_ready.go:58] node "multinode-484895" has status "Ready":"False"
	I0103 19:22:00.126647   30211 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895
	I0103 19:22:00.126672   30211 round_trippers.go:469] Request Headers:
	I0103 19:22:00.126682   30211 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:22:00.126696   30211 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:22:00.129531   30211 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:22:00.129561   30211 round_trippers.go:577] Response Headers:
	I0103 19:22:00.129573   30211 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:22:00.129585   30211 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:22:00.129592   30211 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:22:00 GMT
	I0103 19:22:00.129601   30211 round_trippers.go:580]     Audit-Id: 09397ef2-3ead-4cb6-af1d-4c92d906fd7f
	I0103 19:22:00.129609   30211 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:22:00.129616   30211 round_trippers.go:580]     Content-Type: application/json
	I0103 19:22:00.129716   30211 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","resourceVersion":"308","creationTimestamp":"2024-01-03T19:21:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_21_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:21:40Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0103 19:22:00.626434   30211 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895
	I0103 19:22:00.626457   30211 round_trippers.go:469] Request Headers:
	I0103 19:22:00.626465   30211 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:22:00.626471   30211 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:22:00.629414   30211 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:22:00.629443   30211 round_trippers.go:577] Response Headers:
	I0103 19:22:00.629451   30211 round_trippers.go:580]     Content-Type: application/json
	I0103 19:22:00.629457   30211 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:22:00.629462   30211 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:22:00.629468   30211 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:22:00 GMT
	I0103 19:22:00.629473   30211 round_trippers.go:580]     Audit-Id: 9e5b2f97-e078-4adf-8630-f356e864593b
	I0103 19:22:00.629478   30211 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:22:00.629602   30211 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","resourceVersion":"308","creationTimestamp":"2024-01-03T19:21:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_21_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:21:40Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0103 19:22:01.126120   30211 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895
	I0103 19:22:01.126147   30211 round_trippers.go:469] Request Headers:
	I0103 19:22:01.126154   30211 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:22:01.126161   30211 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:22:01.128980   30211 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:22:01.129000   30211 round_trippers.go:577] Response Headers:
	I0103 19:22:01.129007   30211 round_trippers.go:580]     Audit-Id: a05fc416-6e7f-4df9-a69f-be5cdaec54f7
	I0103 19:22:01.129013   30211 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:22:01.129018   30211 round_trippers.go:580]     Content-Type: application/json
	I0103 19:22:01.129023   30211 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:22:01.129030   30211 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:22:01.129035   30211 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:22:01 GMT
	I0103 19:22:01.129236   30211 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","resourceVersion":"308","creationTimestamp":"2024-01-03T19:21:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_21_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:21:40Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0103 19:22:01.626821   30211 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895
	I0103 19:22:01.626871   30211 round_trippers.go:469] Request Headers:
	I0103 19:22:01.626882   30211 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:22:01.626888   30211 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:22:01.629897   30211 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:22:01.629929   30211 round_trippers.go:577] Response Headers:
	I0103 19:22:01.629940   30211 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:22:01 GMT
	I0103 19:22:01.629949   30211 round_trippers.go:580]     Audit-Id: ac87e8a5-12f6-4e68-80d7-3b21454259c6
	I0103 19:22:01.629957   30211 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:22:01.629965   30211 round_trippers.go:580]     Content-Type: application/json
	I0103 19:22:01.629972   30211 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:22:01.629980   30211 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:22:01.630197   30211 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","resourceVersion":"382","creationTimestamp":"2024-01-03T19:21:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_21_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:21:40Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0103 19:22:01.630514   30211 node_ready.go:49] node "multinode-484895" has status "Ready":"True"
	I0103 19:22:01.630543   30211 node_ready.go:38] duration metric: took 4.004812608s waiting for node "multinode-484895" to be "Ready" ...
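The node_ready poll that just finished issues repeated GETs for the Node object and returns once its Ready condition reports True. A minimal client-go sketch of that check; the node name and kubeconfig path are placeholders taken from the log:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the Node's Ready condition is True.
func nodeReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "multinode-484895", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("node %s Ready=%v\n", node.Name, nodeReady(node))
}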
	I0103 19:22:01.630553   30211 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
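The follow-up wait selects system-critical pods in kube-system by the label selectors listed above and requires each one to reach the Ready condition. A hedged sketch of that selection and check; the selectors are copied from the log line, and this is not minikube's actual code:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the Pod's Ready condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Label selectors for the system-critical components, as logged above.
	selectors := []string{
		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
	}
	for _, sel := range selectors {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(),
			metav1.ListOptions{LabelSelector: sel})
		if err != nil {
			panic(err)
		}
		for i := range pods.Items {
			fmt.Printf("%s ready=%v\n", pods.Items[i].Name, podReady(&pods.Items[i]))
		}
	}
}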
	I0103 19:22:01.630628   30211 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods
	I0103 19:22:01.630637   30211 round_trippers.go:469] Request Headers:
	I0103 19:22:01.630644   30211 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:22:01.630650   30211 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:22:01.639608   30211 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0103 19:22:01.639638   30211 round_trippers.go:577] Response Headers:
	I0103 19:22:01.639650   30211 round_trippers.go:580]     Audit-Id: 70c00f0e-0408-4446-9bd9-e97360283bd3
	I0103 19:22:01.639659   30211 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:22:01.639667   30211 round_trippers.go:580]     Content-Type: application/json
	I0103 19:22:01.639673   30211 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:22:01.639679   30211 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:22:01.639684   30211 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:22:01 GMT
	I0103 19:22:01.641400   30211 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"389"},"items":[{"metadata":{"name":"coredns-5dd5756b68-wzsqb","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9e8dd1fe-7476-40d5-8cb7-562fcdc5deaa","resourceVersion":"387","creationTimestamp":"2024-01-03T19:21:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e9219a81-ca58-4a90-b963-60ed0c2d0b1c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:21:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e9219a81-ca58-4a90-b963-60ed0c2d0b1c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54593 chars]
	I0103 19:22:01.645898   30211 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-wzsqb" in "kube-system" namespace to be "Ready" ...
	I0103 19:22:01.645986   30211 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-wzsqb
	I0103 19:22:01.645994   30211 round_trippers.go:469] Request Headers:
	I0103 19:22:01.646004   30211 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:22:01.646018   30211 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:22:01.655306   30211 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0103 19:22:01.655331   30211 round_trippers.go:577] Response Headers:
	I0103 19:22:01.655338   30211 round_trippers.go:580]     Audit-Id: c5e3d770-bf72-438b-9566-151b6cec1321
	I0103 19:22:01.655344   30211 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:22:01.655349   30211 round_trippers.go:580]     Content-Type: application/json
	I0103 19:22:01.655354   30211 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:22:01.655359   30211 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:22:01.655364   30211 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:22:01 GMT
	I0103 19:22:01.655470   30211 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-wzsqb","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9e8dd1fe-7476-40d5-8cb7-562fcdc5deaa","resourceVersion":"387","creationTimestamp":"2024-01-03T19:21:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e9219a81-ca58-4a90-b963-60ed0c2d0b1c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:21:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e9219a81-ca58-4a90-b963-60ed0c2d0b1c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I0103 19:22:01.655928   30211 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895
	I0103 19:22:01.655942   30211 round_trippers.go:469] Request Headers:
	I0103 19:22:01.655949   30211 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:22:01.655955   30211 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:22:01.660484   30211 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0103 19:22:01.660512   30211 round_trippers.go:577] Response Headers:
	I0103 19:22:01.660519   30211 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:22:01.660525   30211 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:22:01 GMT
	I0103 19:22:01.660540   30211 round_trippers.go:580]     Audit-Id: c7e86f69-4770-47c5-a641-6b6cfef83a5e
	I0103 19:22:01.660547   30211 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:22:01.660555   30211 round_trippers.go:580]     Content-Type: application/json
	I0103 19:22:01.660562   30211 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:22:01.660727   30211 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","resourceVersion":"382","creationTimestamp":"2024-01-03T19:21:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_21_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:21:40Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0103 19:22:02.146528   30211 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-wzsqb
	I0103 19:22:02.146553   30211 round_trippers.go:469] Request Headers:
	I0103 19:22:02.146562   30211 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:22:02.146568   30211 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:22:02.149575   30211 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:22:02.149601   30211 round_trippers.go:577] Response Headers:
	I0103 19:22:02.149611   30211 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:22:02.149619   30211 round_trippers.go:580]     Content-Type: application/json
	I0103 19:22:02.149626   30211 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:22:02.149634   30211 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:22:02.149641   30211 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:22:02 GMT
	I0103 19:22:02.149648   30211 round_trippers.go:580]     Audit-Id: 1fb8df02-197c-4a07-aee0-4d7db15d7787
	I0103 19:22:02.149795   30211 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-wzsqb","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9e8dd1fe-7476-40d5-8cb7-562fcdc5deaa","resourceVersion":"387","creationTimestamp":"2024-01-03T19:21:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e9219a81-ca58-4a90-b963-60ed0c2d0b1c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:21:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e9219a81-ca58-4a90-b963-60ed0c2d0b1c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I0103 19:22:02.150266   30211 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895
	I0103 19:22:02.150280   30211 round_trippers.go:469] Request Headers:
	I0103 19:22:02.150287   30211 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:22:02.150294   30211 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:22:02.152372   30211 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:22:02.152387   30211 round_trippers.go:577] Response Headers:
	I0103 19:22:02.152393   30211 round_trippers.go:580]     Audit-Id: 2179648f-191e-42ba-8472-f0942a491110
	I0103 19:22:02.152399   30211 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:22:02.152404   30211 round_trippers.go:580]     Content-Type: application/json
	I0103 19:22:02.152410   30211 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:22:02.152415   30211 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:22:02.152421   30211 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:22:02 GMT
	I0103 19:22:02.152567   30211 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","resourceVersion":"382","creationTimestamp":"2024-01-03T19:21:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_21_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:21:40Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0103 19:22:02.646203   30211 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-wzsqb
	I0103 19:22:02.646233   30211 round_trippers.go:469] Request Headers:
	I0103 19:22:02.646241   30211 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:22:02.646248   30211 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:22:02.649245   30211 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:22:02.649263   30211 round_trippers.go:577] Response Headers:
	I0103 19:22:02.649270   30211 round_trippers.go:580]     Audit-Id: 4997a284-1b33-4adb-b525-bf504b637e3b
	I0103 19:22:02.649276   30211 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:22:02.649283   30211 round_trippers.go:580]     Content-Type: application/json
	I0103 19:22:02.649290   30211 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:22:02.649298   30211 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:22:02.649310   30211 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:22:02 GMT
	I0103 19:22:02.649495   30211 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-wzsqb","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9e8dd1fe-7476-40d5-8cb7-562fcdc5deaa","resourceVersion":"387","creationTimestamp":"2024-01-03T19:21:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e9219a81-ca58-4a90-b963-60ed0c2d0b1c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:21:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e9219a81-ca58-4a90-b963-60ed0c2d0b1c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I0103 19:22:02.649975   30211 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895
	I0103 19:22:02.649990   30211 round_trippers.go:469] Request Headers:
	I0103 19:22:02.649998   30211 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:22:02.650004   30211 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:22:02.652134   30211 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:22:02.652150   30211 round_trippers.go:577] Response Headers:
	I0103 19:22:02.652157   30211 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:22:02.652163   30211 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:22:02 GMT
	I0103 19:22:02.652168   30211 round_trippers.go:580]     Audit-Id: 6215f1dd-69e5-4366-a741-ca48d470ebf7
	I0103 19:22:02.652173   30211 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:22:02.652178   30211 round_trippers.go:580]     Content-Type: application/json
	I0103 19:22:02.652183   30211 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:22:02.652342   30211 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","resourceVersion":"382","creationTimestamp":"2024-01-03T19:21:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_21_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:21:40Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0103 19:22:03.147064   30211 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-wzsqb
	I0103 19:22:03.147096   30211 round_trippers.go:469] Request Headers:
	I0103 19:22:03.147106   30211 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:22:03.147134   30211 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:22:03.150489   30211 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0103 19:22:03.150540   30211 round_trippers.go:577] Response Headers:
	I0103 19:22:03.150553   30211 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:22:03.150562   30211 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:22:03 GMT
	I0103 19:22:03.150569   30211 round_trippers.go:580]     Audit-Id: a8ef4a1f-0bf2-4e5a-861b-c49c53a6517e
	I0103 19:22:03.150577   30211 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:22:03.150586   30211 round_trippers.go:580]     Content-Type: application/json
	I0103 19:22:03.150599   30211 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:22:03.150730   30211 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-wzsqb","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9e8dd1fe-7476-40d5-8cb7-562fcdc5deaa","resourceVersion":"387","creationTimestamp":"2024-01-03T19:21:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e9219a81-ca58-4a90-b963-60ed0c2d0b1c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:21:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e9219a81-ca58-4a90-b963-60ed0c2d0b1c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I0103 19:22:03.151197   30211 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895
	I0103 19:22:03.151213   30211 round_trippers.go:469] Request Headers:
	I0103 19:22:03.151221   30211 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:22:03.151226   30211 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:22:03.153647   30211 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:22:03.153664   30211 round_trippers.go:577] Response Headers:
	I0103 19:22:03.153672   30211 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:22:03.153678   30211 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:22:03.153683   30211 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:22:03 GMT
	I0103 19:22:03.153688   30211 round_trippers.go:580]     Audit-Id: 762da924-1de3-48a4-a26f-b9314cb8c83b
	I0103 19:22:03.153693   30211 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:22:03.153698   30211 round_trippers.go:580]     Content-Type: application/json
	I0103 19:22:03.153864   30211 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","resourceVersion":"382","creationTimestamp":"2024-01-03T19:21:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_21_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:21:40Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0103 19:22:03.646506   30211 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-wzsqb
	I0103 19:22:03.646546   30211 round_trippers.go:469] Request Headers:
	I0103 19:22:03.646558   30211 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:22:03.646568   30211 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:22:03.649312   30211 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:22:03.649332   30211 round_trippers.go:577] Response Headers:
	I0103 19:22:03.649339   30211 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:22:03.649344   30211 round_trippers.go:580]     Content-Type: application/json
	I0103 19:22:03.649349   30211 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:22:03.649357   30211 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:22:03.649366   30211 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:22:03 GMT
	I0103 19:22:03.649373   30211 round_trippers.go:580]     Audit-Id: a6875343-3328-4f97-bc30-9b026191af0a
	I0103 19:22:03.649504   30211 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-wzsqb","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9e8dd1fe-7476-40d5-8cb7-562fcdc5deaa","resourceVersion":"400","creationTimestamp":"2024-01-03T19:21:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e9219a81-ca58-4a90-b963-60ed0c2d0b1c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:21:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e9219a81-ca58-4a90-b963-60ed0c2d0b1c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I0103 19:22:03.649917   30211 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895
	I0103 19:22:03.649929   30211 round_trippers.go:469] Request Headers:
	I0103 19:22:03.649936   30211 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:22:03.649941   30211 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:22:03.652165   30211 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:22:03.652186   30211 round_trippers.go:577] Response Headers:
	I0103 19:22:03.652192   30211 round_trippers.go:580]     Content-Type: application/json
	I0103 19:22:03.652197   30211 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:22:03.652202   30211 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:22:03.652207   30211 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:22:03 GMT
	I0103 19:22:03.652212   30211 round_trippers.go:580]     Audit-Id: d80e59be-fde7-434b-ac67-50d07a9b3fa6
	I0103 19:22:03.652218   30211 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:22:03.652324   30211 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","resourceVersion":"382","creationTimestamp":"2024-01-03T19:21:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_21_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:21:40Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0103 19:22:03.652588   30211 pod_ready.go:92] pod "coredns-5dd5756b68-wzsqb" in "kube-system" namespace has status "Ready":"True"
	I0103 19:22:03.652601   30211 pod_ready.go:81] duration metric: took 2.006675081s waiting for pod "coredns-5dd5756b68-wzsqb" in "kube-system" namespace to be "Ready" ...
	I0103 19:22:03.652619   30211 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-484895" in "kube-system" namespace to be "Ready" ...
	I0103 19:22:03.652665   30211 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-484895
	I0103 19:22:03.652673   30211 round_trippers.go:469] Request Headers:
	I0103 19:22:03.652679   30211 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:22:03.652685   30211 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:22:03.654572   30211 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0103 19:22:03.654591   30211 round_trippers.go:577] Response Headers:
	I0103 19:22:03.654597   30211 round_trippers.go:580]     Audit-Id: a012f620-410a-4401-8f42-28e1e9c08c48
	I0103 19:22:03.654603   30211 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:22:03.654608   30211 round_trippers.go:580]     Content-Type: application/json
	I0103 19:22:03.654613   30211 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:22:03.654618   30211 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:22:03.654623   30211 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:22:03 GMT
	I0103 19:22:03.654763   30211 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-484895","namespace":"kube-system","uid":"2b5f9dc7-2d61-4968-9b9a-cfc029c9522b","resourceVersion":"358","creationTimestamp":"2024-01-03T19:21:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.191:2379","kubernetes.io/config.hash":"9bc39430cce393fdab624e5093adf15c","kubernetes.io/config.mirror":"9bc39430cce393fdab624e5093adf15c","kubernetes.io/config.seen":"2024-01-03T19:21:43.948366778Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:21:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I0103 19:22:03.655091   30211 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895
	I0103 19:22:03.655102   30211 round_trippers.go:469] Request Headers:
	I0103 19:22:03.655109   30211 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:22:03.655115   30211 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:22:03.657058   30211 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0103 19:22:03.657075   30211 round_trippers.go:577] Response Headers:
	I0103 19:22:03.657082   30211 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:22:03.657088   30211 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:22:03.657093   30211 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:22:03 GMT
	I0103 19:22:03.657098   30211 round_trippers.go:580]     Audit-Id: fe354076-f056-432b-84e0-df10ce4fff6e
	I0103 19:22:03.657104   30211 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:22:03.657109   30211 round_trippers.go:580]     Content-Type: application/json
	I0103 19:22:03.657272   30211 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","resourceVersion":"382","creationTimestamp":"2024-01-03T19:21:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_21_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:21:40Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0103 19:22:03.657543   30211 pod_ready.go:92] pod "etcd-multinode-484895" in "kube-system" namespace has status "Ready":"True"
	I0103 19:22:03.657557   30211 pod_ready.go:81] duration metric: took 4.932398ms waiting for pod "etcd-multinode-484895" in "kube-system" namespace to be "Ready" ...
	I0103 19:22:03.657568   30211 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-484895" in "kube-system" namespace to be "Ready" ...
	I0103 19:22:03.657616   30211 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-484895
	I0103 19:22:03.657623   30211 round_trippers.go:469] Request Headers:
	I0103 19:22:03.657629   30211 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:22:03.657636   30211 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:22:03.659573   30211 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0103 19:22:03.659589   30211 round_trippers.go:577] Response Headers:
	I0103 19:22:03.659596   30211 round_trippers.go:580]     Audit-Id: 014edc66-3868-411c-bd5b-d4d582928ab4
	I0103 19:22:03.659603   30211 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:22:03.659608   30211 round_trippers.go:580]     Content-Type: application/json
	I0103 19:22:03.659613   30211 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:22:03.659619   30211 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:22:03.659624   30211 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:22:03 GMT
	I0103 19:22:03.659753   30211 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-484895","namespace":"kube-system","uid":"f9f36416-b761-4534-8e09-bc3c94813149","resourceVersion":"313","creationTimestamp":"2024-01-03T19:21:44Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.191:8443","kubernetes.io/config.hash":"2adb5a2561f637a585e38e2b73f2b809","kubernetes.io/config.mirror":"2adb5a2561f637a585e38e2b73f2b809","kubernetes.io/config.seen":"2024-01-03T19:21:43.948370781Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:21:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7614 chars]
	I0103 19:22:03.660098   30211 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895
	I0103 19:22:03.660109   30211 round_trippers.go:469] Request Headers:
	I0103 19:22:03.660115   30211 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:22:03.660121   30211 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:22:03.661986   30211 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0103 19:22:03.662001   30211 round_trippers.go:577] Response Headers:
	I0103 19:22:03.662007   30211 round_trippers.go:580]     Content-Type: application/json
	I0103 19:22:03.662012   30211 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:22:03.662017   30211 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:22:03.662023   30211 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:22:03 GMT
	I0103 19:22:03.662030   30211 round_trippers.go:580]     Audit-Id: 54d4f1d7-4988-4f8d-9d45-3f00fb531683
	I0103 19:22:03.662036   30211 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:22:03.662284   30211 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","resourceVersion":"382","creationTimestamp":"2024-01-03T19:21:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_21_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:21:40Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0103 19:22:04.157937   30211 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-484895
	I0103 19:22:04.157966   30211 round_trippers.go:469] Request Headers:
	I0103 19:22:04.157974   30211 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:22:04.157980   30211 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:22:04.160285   30211 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:22:04.160308   30211 round_trippers.go:577] Response Headers:
	I0103 19:22:04.160314   30211 round_trippers.go:580]     Audit-Id: d5bbf2f8-3f03-4182-8a36-f90025da612f
	I0103 19:22:04.160320   30211 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:22:04.160324   30211 round_trippers.go:580]     Content-Type: application/json
	I0103 19:22:04.160330   30211 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:22:04.160335   30211 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:22:04.160340   30211 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:22:04 GMT
	I0103 19:22:04.160485   30211 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-484895","namespace":"kube-system","uid":"f9f36416-b761-4534-8e09-bc3c94813149","resourceVersion":"406","creationTimestamp":"2024-01-03T19:21:44Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.191:8443","kubernetes.io/config.hash":"2adb5a2561f637a585e38e2b73f2b809","kubernetes.io/config.mirror":"2adb5a2561f637a585e38e2b73f2b809","kubernetes.io/config.seen":"2024-01-03T19:21:43.948370781Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:21:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I0103 19:22:04.160892   30211 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895
	I0103 19:22:04.160903   30211 round_trippers.go:469] Request Headers:
	I0103 19:22:04.160910   30211 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:22:04.160916   30211 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:22:04.163061   30211 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:22:04.163079   30211 round_trippers.go:577] Response Headers:
	I0103 19:22:04.163085   30211 round_trippers.go:580]     Audit-Id: 9c14a4e5-989a-482b-8acf-040fd25303ea
	I0103 19:22:04.163090   30211 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:22:04.163095   30211 round_trippers.go:580]     Content-Type: application/json
	I0103 19:22:04.163100   30211 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:22:04.163105   30211 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:22:04.163110   30211 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:22:04 GMT
	I0103 19:22:04.163253   30211 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","resourceVersion":"382","creationTimestamp":"2024-01-03T19:21:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_21_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:21:40Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0103 19:22:04.163542   30211 pod_ready.go:92] pod "kube-apiserver-multinode-484895" in "kube-system" namespace has status "Ready":"True"
	I0103 19:22:04.163557   30211 pod_ready.go:81] duration metric: took 505.981303ms waiting for pod "kube-apiserver-multinode-484895" in "kube-system" namespace to be "Ready" ...
	I0103 19:22:04.163566   30211 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-484895" in "kube-system" namespace to be "Ready" ...
	I0103 19:22:04.163610   30211 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-484895
	I0103 19:22:04.163618   30211 round_trippers.go:469] Request Headers:
	I0103 19:22:04.163624   30211 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:22:04.163630   30211 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:22:04.165680   30211 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:22:04.165700   30211 round_trippers.go:577] Response Headers:
	I0103 19:22:04.165710   30211 round_trippers.go:580]     Audit-Id: f2f5f678-fa79-4819-8629-f3736fb0ecf7
	I0103 19:22:04.165719   30211 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:22:04.165734   30211 round_trippers.go:580]     Content-Type: application/json
	I0103 19:22:04.165742   30211 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:22:04.165750   30211 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:22:04.165759   30211 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:22:04 GMT
	I0103 19:22:04.166029   30211 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-484895","namespace":"kube-system","uid":"a04de258-1f92-4ac7-8f30-18ad9ebb6d40","resourceVersion":"407","creationTimestamp":"2024-01-03T19:21:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"091c426717be69d480bcc59d28e953ce","kubernetes.io/config.mirror":"091c426717be69d480bcc59d28e953ce","kubernetes.io/config.seen":"2024-01-03T19:21:43.948371847Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:21:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I0103 19:22:04.166377   30211 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895
	I0103 19:22:04.166389   30211 round_trippers.go:469] Request Headers:
	I0103 19:22:04.166396   30211 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:22:04.166401   30211 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:22:04.169147   30211 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:22:04.169168   30211 round_trippers.go:577] Response Headers:
	I0103 19:22:04.169178   30211 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:22:04.169186   30211 round_trippers.go:580]     Content-Type: application/json
	I0103 19:22:04.169193   30211 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:22:04.169201   30211 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:22:04.169209   30211 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:22:04 GMT
	I0103 19:22:04.169218   30211 round_trippers.go:580]     Audit-Id: e0ea77ef-1771-44b9-8d9d-abc025c8c733
	I0103 19:22:04.169397   30211 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","resourceVersion":"382","creationTimestamp":"2024-01-03T19:21:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_21_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:21:40Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0103 19:22:04.169664   30211 pod_ready.go:92] pod "kube-controller-manager-multinode-484895" in "kube-system" namespace has status "Ready":"True"
	I0103 19:22:04.169679   30211 pod_ready.go:81] duration metric: took 6.107792ms waiting for pod "kube-controller-manager-multinode-484895" in "kube-system" namespace to be "Ready" ...
	I0103 19:22:04.169687   30211 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tp9s2" in "kube-system" namespace to be "Ready" ...
	I0103 19:22:04.169739   30211 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tp9s2
	I0103 19:22:04.169747   30211 round_trippers.go:469] Request Headers:
	I0103 19:22:04.169753   30211 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:22:04.169759   30211 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:22:04.179548   30211 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0103 19:22:04.179573   30211 round_trippers.go:577] Response Headers:
	I0103 19:22:04.179583   30211 round_trippers.go:580]     Audit-Id: 20c0c1d4-a018-44fb-ac2e-2f6623da3bda
	I0103 19:22:04.179591   30211 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:22:04.179598   30211 round_trippers.go:580]     Content-Type: application/json
	I0103 19:22:04.179605   30211 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:22:04.179612   30211 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:22:04.179622   30211 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:22:04 GMT
	I0103 19:22:04.180045   30211 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tp9s2","generateName":"kube-proxy-","namespace":"kube-system","uid":"728b1db9-b145-4ad3-b366-7fd8306d7a2a","resourceVersion":"373","creationTimestamp":"2024-01-03T19:21:56Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"93e45959-afd7-4869-a648-321076d75f45","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:21:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"93e45959-afd7-4869-a648-321076d75f45\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0103 19:22:04.180408   30211 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895
	I0103 19:22:04.180422   30211 round_trippers.go:469] Request Headers:
	I0103 19:22:04.180433   30211 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:22:04.180441   30211 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:22:04.182830   30211 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:22:04.182849   30211 round_trippers.go:577] Response Headers:
	I0103 19:22:04.182859   30211 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:22:04 GMT
	I0103 19:22:04.182868   30211 round_trippers.go:580]     Audit-Id: c7a5778d-b457-45c0-bf99-2c6aa5ec56d2
	I0103 19:22:04.182876   30211 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:22:04.182888   30211 round_trippers.go:580]     Content-Type: application/json
	I0103 19:22:04.182896   30211 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:22:04.182904   30211 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:22:04.183047   30211 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","resourceVersion":"382","creationTimestamp":"2024-01-03T19:21:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_21_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:21:40Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0103 19:22:04.183414   30211 pod_ready.go:92] pod "kube-proxy-tp9s2" in "kube-system" namespace has status "Ready":"True"
	I0103 19:22:04.183434   30211 pod_ready.go:81] duration metric: took 13.740936ms waiting for pod "kube-proxy-tp9s2" in "kube-system" namespace to be "Ready" ...
	I0103 19:22:04.183447   30211 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-484895" in "kube-system" namespace to be "Ready" ...
	I0103 19:22:04.246724   30211 request.go:629] Waited for 63.215986ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-484895
	I0103 19:22:04.246802   30211 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-484895
	I0103 19:22:04.246809   30211 round_trippers.go:469] Request Headers:
	I0103 19:22:04.246821   30211 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:22:04.246830   30211 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:22:04.249595   30211 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:22:04.249610   30211 round_trippers.go:577] Response Headers:
	I0103 19:22:04.249618   30211 round_trippers.go:580]     Audit-Id: 69bf1b93-7d73-4fd6-9594-4601609ed790
	I0103 19:22:04.249627   30211 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:22:04.249635   30211 round_trippers.go:580]     Content-Type: application/json
	I0103 19:22:04.249645   30211 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:22:04.249654   30211 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:22:04.249664   30211 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:22:04 GMT
	I0103 19:22:04.249885   30211 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-484895","namespace":"kube-system","uid":"f981e6c0-1f4a-44ed-b043-c69ef28b4fa5","resourceVersion":"405","creationTimestamp":"2024-01-03T19:21:44Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"2de4242735fdb53c42fed3daf21e4e5e","kubernetes.io/config.mirror":"2de4242735fdb53c42fed3daf21e4e5e","kubernetes.io/config.seen":"2024-01-03T19:21:43.948372698Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:21:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I0103 19:22:04.446653   30211 request.go:629] Waited for 196.292744ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.191:8443/api/v1/nodes/multinode-484895
	I0103 19:22:04.446749   30211 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895
	I0103 19:22:04.446757   30211 round_trippers.go:469] Request Headers:
	I0103 19:22:04.446767   30211 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:22:04.446780   30211 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:22:04.449226   30211 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:22:04.449246   30211 round_trippers.go:577] Response Headers:
	I0103 19:22:04.449253   30211 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:22:04 GMT
	I0103 19:22:04.449258   30211 round_trippers.go:580]     Audit-Id: 3e95ba63-17e3-4501-af10-56b7e3c8b408
	I0103 19:22:04.449263   30211 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:22:04.449271   30211 round_trippers.go:580]     Content-Type: application/json
	I0103 19:22:04.449279   30211 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:22:04.449288   30211 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:22:04.449424   30211 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","resourceVersion":"382","creationTimestamp":"2024-01-03T19:21:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_21_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:21:40Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0103 19:22:04.449833   30211 pod_ready.go:92] pod "kube-scheduler-multinode-484895" in "kube-system" namespace has status "Ready":"True"
	I0103 19:22:04.449855   30211 pod_ready.go:81] duration metric: took 266.39632ms waiting for pod "kube-scheduler-multinode-484895" in "kube-system" namespace to be "Ready" ...
	I0103 19:22:04.449867   30211 pod_ready.go:38] duration metric: took 2.819292533s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 19:22:04.449884   30211 api_server.go:52] waiting for apiserver process to appear ...
	I0103 19:22:04.449947   30211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 19:22:04.463609   30211 command_runner.go:130] > 1082
	I0103 19:22:04.463649   30211 api_server.go:72] duration metric: took 7.566417242s to wait for apiserver process to appear ...
	I0103 19:22:04.463660   30211 api_server.go:88] waiting for apiserver healthz status ...
	I0103 19:22:04.463682   30211 api_server.go:253] Checking apiserver healthz at https://192.168.39.191:8443/healthz ...
	I0103 19:22:04.468530   30211 api_server.go:279] https://192.168.39.191:8443/healthz returned 200:
	ok
	I0103 19:22:04.468592   30211 round_trippers.go:463] GET https://192.168.39.191:8443/version
	I0103 19:22:04.468600   30211 round_trippers.go:469] Request Headers:
	I0103 19:22:04.468608   30211 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:22:04.468615   30211 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:22:04.469647   30211 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0103 19:22:04.469669   30211 round_trippers.go:577] Response Headers:
	I0103 19:22:04.469679   30211 round_trippers.go:580]     Content-Length: 264
	I0103 19:22:04.469691   30211 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:22:04 GMT
	I0103 19:22:04.469699   30211 round_trippers.go:580]     Audit-Id: aa377cc0-1320-4c01-917d-7ec29139cdf2
	I0103 19:22:04.469708   30211 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:22:04.469717   30211 round_trippers.go:580]     Content-Type: application/json
	I0103 19:22:04.469729   30211 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:22:04.469737   30211 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:22:04.469790   30211 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0103 19:22:04.469881   30211 api_server.go:141] control plane version: v1.28.4
	I0103 19:22:04.469904   30211 api_server.go:131] duration metric: took 6.237484ms to wait for apiserver health ...
	I0103 19:22:04.469915   30211 system_pods.go:43] waiting for kube-system pods to appear ...
	I0103 19:22:04.647400   30211 request.go:629] Waited for 177.389648ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods
	I0103 19:22:04.647475   30211 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods
	I0103 19:22:04.647480   30211 round_trippers.go:469] Request Headers:
	I0103 19:22:04.647489   30211 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:22:04.647498   30211 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:22:04.651262   30211 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0103 19:22:04.651288   30211 round_trippers.go:577] Response Headers:
	I0103 19:22:04.651298   30211 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:22:04 GMT
	I0103 19:22:04.651305   30211 round_trippers.go:580]     Audit-Id: 78b3195a-ded3-40fe-ac75-bd2a089c5775
	I0103 19:22:04.651315   30211 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:22:04.651325   30211 round_trippers.go:580]     Content-Type: application/json
	I0103 19:22:04.651335   30211 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:22:04.651347   30211 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:22:04.652138   30211 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"408"},"items":[{"metadata":{"name":"coredns-5dd5756b68-wzsqb","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9e8dd1fe-7476-40d5-8cb7-562fcdc5deaa","resourceVersion":"400","creationTimestamp":"2024-01-03T19:21:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e9219a81-ca58-4a90-b963-60ed0c2d0b1c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:21:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e9219a81-ca58-4a90-b963-60ed0c2d0b1c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53996 chars]
	I0103 19:22:04.653751   30211 system_pods.go:59] 8 kube-system pods found
	I0103 19:22:04.653783   30211 system_pods.go:61] "coredns-5dd5756b68-wzsqb" [9e8dd1fe-7476-40d5-8cb7-562fcdc5deaa] Running
	I0103 19:22:04.653791   30211 system_pods.go:61] "etcd-multinode-484895" [2b5f9dc7-2d61-4968-9b9a-cfc029c9522b] Running
	I0103 19:22:04.653796   30211 system_pods.go:61] "kindnet-gqgk2" [8d4f9028-52ad-44dd-83be-0bb7cc590b7f] Running
	I0103 19:22:04.653801   30211 system_pods.go:61] "kube-apiserver-multinode-484895" [f9f36416-b761-4534-8e09-bc3c94813149] Running
	I0103 19:22:04.653805   30211 system_pods.go:61] "kube-controller-manager-multinode-484895" [a04de258-1f92-4ac7-8f30-18ad9ebb6d40] Running
	I0103 19:22:04.653809   30211 system_pods.go:61] "kube-proxy-tp9s2" [728b1db9-b145-4ad3-b366-7fd8306d7a2a] Running
	I0103 19:22:04.653815   30211 system_pods.go:61] "kube-scheduler-multinode-484895" [f981e6c0-1f4a-44ed-b043-c69ef28b4fa5] Running
	I0103 19:22:04.653820   30211 system_pods.go:61] "storage-provisioner" [82edd1c3-f361-4f86-8d59-8b89193d7a31] Running
	I0103 19:22:04.653828   30211 system_pods.go:74] duration metric: took 183.904089ms to wait for pod list to return data ...
	I0103 19:22:04.653837   30211 default_sa.go:34] waiting for default service account to be created ...
	I0103 19:22:04.847284   30211 request.go:629] Waited for 193.380035ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.191:8443/api/v1/namespaces/default/serviceaccounts
	I0103 19:22:04.847353   30211 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/namespaces/default/serviceaccounts
	I0103 19:22:04.847366   30211 round_trippers.go:469] Request Headers:
	I0103 19:22:04.847376   30211 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:22:04.847390   30211 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:22:04.850271   30211 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:22:04.850292   30211 round_trippers.go:577] Response Headers:
	I0103 19:22:04.850299   30211 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:22:04.850304   30211 round_trippers.go:580]     Content-Length: 261
	I0103 19:22:04.850309   30211 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:22:04 GMT
	I0103 19:22:04.850314   30211 round_trippers.go:580]     Audit-Id: c36848a3-2773-4c9e-a19a-5f46286c0717
	I0103 19:22:04.850319   30211 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:22:04.850324   30211 round_trippers.go:580]     Content-Type: application/json
	I0103 19:22:04.850329   30211 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:22:04.850349   30211 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"409"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"47bf7b55-c706-4355-a436-e9ecf18d06f2","resourceVersion":"306","creationTimestamp":"2024-01-03T19:21:56Z"}}]}
	I0103 19:22:04.850505   30211 default_sa.go:45] found service account: "default"
	I0103 19:22:04.850532   30211 default_sa.go:55] duration metric: took 196.685999ms for default service account to be created ...
	I0103 19:22:04.850543   30211 system_pods.go:116] waiting for k8s-apps to be running ...
	I0103 19:22:05.046814   30211 request.go:629] Waited for 196.214778ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods
	I0103 19:22:05.046882   30211 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods
	I0103 19:22:05.046888   30211 round_trippers.go:469] Request Headers:
	I0103 19:22:05.046896   30211 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:22:05.046903   30211 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:22:05.050797   30211 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0103 19:22:05.050828   30211 round_trippers.go:577] Response Headers:
	I0103 19:22:05.050839   30211 round_trippers.go:580]     Audit-Id: db7cbb2b-cb9a-46ae-a474-ab7ed1f39c96
	I0103 19:22:05.050846   30211 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:22:05.050852   30211 round_trippers.go:580]     Content-Type: application/json
	I0103 19:22:05.050858   30211 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:22:05.050864   30211 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:22:05.050872   30211 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:22:05 GMT
	I0103 19:22:05.051644   30211 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"409"},"items":[{"metadata":{"name":"coredns-5dd5756b68-wzsqb","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9e8dd1fe-7476-40d5-8cb7-562fcdc5deaa","resourceVersion":"400","creationTimestamp":"2024-01-03T19:21:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e9219a81-ca58-4a90-b963-60ed0c2d0b1c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:21:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e9219a81-ca58-4a90-b963-60ed0c2d0b1c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53996 chars]
	I0103 19:22:05.053604   30211 system_pods.go:86] 8 kube-system pods found
	I0103 19:22:05.053629   30211 system_pods.go:89] "coredns-5dd5756b68-wzsqb" [9e8dd1fe-7476-40d5-8cb7-562fcdc5deaa] Running
	I0103 19:22:05.053637   30211 system_pods.go:89] "etcd-multinode-484895" [2b5f9dc7-2d61-4968-9b9a-cfc029c9522b] Running
	I0103 19:22:05.053642   30211 system_pods.go:89] "kindnet-gqgk2" [8d4f9028-52ad-44dd-83be-0bb7cc590b7f] Running
	I0103 19:22:05.053647   30211 system_pods.go:89] "kube-apiserver-multinode-484895" [f9f36416-b761-4534-8e09-bc3c94813149] Running
	I0103 19:22:05.053651   30211 system_pods.go:89] "kube-controller-manager-multinode-484895" [a04de258-1f92-4ac7-8f30-18ad9ebb6d40] Running
	I0103 19:22:05.053655   30211 system_pods.go:89] "kube-proxy-tp9s2" [728b1db9-b145-4ad3-b366-7fd8306d7a2a] Running
	I0103 19:22:05.053658   30211 system_pods.go:89] "kube-scheduler-multinode-484895" [f981e6c0-1f4a-44ed-b043-c69ef28b4fa5] Running
	I0103 19:22:05.053662   30211 system_pods.go:89] "storage-provisioner" [82edd1c3-f361-4f86-8d59-8b89193d7a31] Running
	I0103 19:22:05.053668   30211 system_pods.go:126] duration metric: took 203.116745ms to wait for k8s-apps to be running ...
	I0103 19:22:05.053675   30211 system_svc.go:44] waiting for kubelet service to be running ....
	I0103 19:22:05.053717   30211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 19:22:05.066173   30211 system_svc.go:56] duration metric: took 12.490371ms WaitForService to wait for kubelet.
	I0103 19:22:05.066199   30211 kubeadm.go:581] duration metric: took 8.168968732s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0103 19:22:05.066217   30211 node_conditions.go:102] verifying NodePressure condition ...
	I0103 19:22:05.246776   30211 request.go:629] Waited for 180.488848ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.191:8443/api/v1/nodes
	I0103 19:22:05.246859   30211 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes
	I0103 19:22:05.246865   30211 round_trippers.go:469] Request Headers:
	I0103 19:22:05.246873   30211 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:22:05.246884   30211 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:22:05.250482   30211 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0103 19:22:05.250506   30211 round_trippers.go:577] Response Headers:
	I0103 19:22:05.250517   30211 round_trippers.go:580]     Audit-Id: 325216fc-3934-4a42-b97f-02786cebfc50
	I0103 19:22:05.250544   30211 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:22:05.250553   30211 round_trippers.go:580]     Content-Type: application/json
	I0103 19:22:05.250562   30211 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:22:05.250569   30211 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:22:05.250578   30211 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:22:05 GMT
	I0103 19:22:05.251169   30211 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"409"},"items":[{"metadata":{"name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","resourceVersion":"382","creationTimestamp":"2024-01-03T19:21:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_21_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 5952 chars]
	I0103 19:22:05.251629   30211 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0103 19:22:05.251662   30211 node_conditions.go:123] node cpu capacity is 2
	I0103 19:22:05.251674   30211 node_conditions.go:105] duration metric: took 185.451965ms to run NodePressure ...
	I0103 19:22:05.251688   30211 start.go:228] waiting for startup goroutines ...
	I0103 19:22:05.251698   30211 start.go:233] waiting for cluster config update ...
	I0103 19:22:05.251709   30211 start.go:242] writing updated cluster config ...
	I0103 19:22:05.253619   30211 out.go:177] 
	I0103 19:22:05.255742   30211 config.go:182] Loaded profile config "multinode-484895": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 19:22:05.255817   30211 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/multinode-484895/config.json ...
	I0103 19:22:05.257420   30211 out.go:177] * Starting worker node multinode-484895-m02 in cluster multinode-484895
	I0103 19:22:05.258654   30211 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0103 19:22:05.258677   30211 cache.go:56] Caching tarball of preloaded images
	I0103 19:22:05.258783   30211 preload.go:174] Found /home/jenkins/minikube-integration/17885-9609/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0103 19:22:05.258798   30211 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0103 19:22:05.258859   30211 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/multinode-484895/config.json ...
	I0103 19:22:05.259011   30211 start.go:365] acquiring machines lock for multinode-484895-m02: {Name:mk43df5d7e9fef8aa5f3e5c539ca15bff35ae8cf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0103 19:22:05.259057   30211 start.go:369] acquired machines lock for "multinode-484895-m02" in 26.444µs
	I0103 19:22:05.259079   30211 start.go:93] Provisioning new machine with config: &{Name:multinode-484895 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-484895 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.191 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0103 19:22:05.259141   30211 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0103 19:22:05.261138   30211 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0103 19:22:05.261212   30211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 19:22:05.261250   30211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 19:22:05.276205   30211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34601
	I0103 19:22:05.276586   30211 main.go:141] libmachine: () Calling .GetVersion
	I0103 19:22:05.277035   30211 main.go:141] libmachine: Using API Version  1
	I0103 19:22:05.277061   30211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 19:22:05.277388   30211 main.go:141] libmachine: () Calling .GetMachineName
	I0103 19:22:05.277588   30211 main.go:141] libmachine: (multinode-484895-m02) Calling .GetMachineName
	I0103 19:22:05.277733   30211 main.go:141] libmachine: (multinode-484895-m02) Calling .DriverName
	I0103 19:22:05.277899   30211 start.go:159] libmachine.API.Create for "multinode-484895" (driver="kvm2")
	I0103 19:22:05.277937   30211 client.go:168] LocalClient.Create starting
	I0103 19:22:05.277975   30211 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem
	I0103 19:22:05.278014   30211 main.go:141] libmachine: Decoding PEM data...
	I0103 19:22:05.278036   30211 main.go:141] libmachine: Parsing certificate...
	I0103 19:22:05.278106   30211 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem
	I0103 19:22:05.278136   30211 main.go:141] libmachine: Decoding PEM data...
	I0103 19:22:05.278151   30211 main.go:141] libmachine: Parsing certificate...
	I0103 19:22:05.278169   30211 main.go:141] libmachine: Running pre-create checks...
	I0103 19:22:05.278178   30211 main.go:141] libmachine: (multinode-484895-m02) Calling .PreCreateCheck
	I0103 19:22:05.278356   30211 main.go:141] libmachine: (multinode-484895-m02) Calling .GetConfigRaw
	I0103 19:22:05.278734   30211 main.go:141] libmachine: Creating machine...
	I0103 19:22:05.278749   30211 main.go:141] libmachine: (multinode-484895-m02) Calling .Create
	I0103 19:22:05.278870   30211 main.go:141] libmachine: (multinode-484895-m02) Creating KVM machine...
	I0103 19:22:05.280242   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | found existing default KVM network
	I0103 19:22:05.280408   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | found existing private KVM network mk-multinode-484895
	I0103 19:22:05.280547   30211 main.go:141] libmachine: (multinode-484895-m02) Setting up store path in /home/jenkins/minikube-integration/17885-9609/.minikube/machines/multinode-484895-m02 ...
	I0103 19:22:05.280575   30211 main.go:141] libmachine: (multinode-484895-m02) Building disk image from file:///home/jenkins/minikube-integration/17885-9609/.minikube/cache/iso/amd64/minikube-v1.32.1-1702708929-17806-amd64.iso
	I0103 19:22:05.280616   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | I0103 19:22:05.280518   30558 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17885-9609/.minikube
	I0103 19:22:05.280706   30211 main.go:141] libmachine: (multinode-484895-m02) Downloading /home/jenkins/minikube-integration/17885-9609/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17885-9609/.minikube/cache/iso/amd64/minikube-v1.32.1-1702708929-17806-amd64.iso...
	I0103 19:22:05.483215   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | I0103 19:22:05.483040   30558 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/multinode-484895-m02/id_rsa...
	I0103 19:22:05.787118   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | I0103 19:22:05.786969   30558 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/multinode-484895-m02/multinode-484895-m02.rawdisk...
	I0103 19:22:05.787157   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | Writing magic tar header
	I0103 19:22:05.787176   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | Writing SSH key tar header
	I0103 19:22:05.787188   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | I0103 19:22:05.787088   30558 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17885-9609/.minikube/machines/multinode-484895-m02 ...
	I0103 19:22:05.787211   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/multinode-484895-m02
	I0103 19:22:05.787225   30211 main.go:141] libmachine: (multinode-484895-m02) Setting executable bit set on /home/jenkins/minikube-integration/17885-9609/.minikube/machines/multinode-484895-m02 (perms=drwx------)
	I0103 19:22:05.787240   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17885-9609/.minikube/machines
	I0103 19:22:05.787261   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17885-9609/.minikube
	I0103 19:22:05.787285   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17885-9609
	I0103 19:22:05.787304   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0103 19:22:05.787319   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | Checking permissions on dir: /home/jenkins
	I0103 19:22:05.787334   30211 main.go:141] libmachine: (multinode-484895-m02) Setting executable bit set on /home/jenkins/minikube-integration/17885-9609/.minikube/machines (perms=drwxr-xr-x)
	I0103 19:22:05.787350   30211 main.go:141] libmachine: (multinode-484895-m02) Setting executable bit set on /home/jenkins/minikube-integration/17885-9609/.minikube (perms=drwxr-xr-x)
	I0103 19:22:05.787366   30211 main.go:141] libmachine: (multinode-484895-m02) Setting executable bit set on /home/jenkins/minikube-integration/17885-9609 (perms=drwxrwxr-x)
	I0103 19:22:05.787385   30211 main.go:141] libmachine: (multinode-484895-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0103 19:22:05.787400   30211 main.go:141] libmachine: (multinode-484895-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0103 19:22:05.787417   30211 main.go:141] libmachine: (multinode-484895-m02) Creating domain...
	I0103 19:22:05.787432   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | Checking permissions on dir: /home
	I0103 19:22:05.787462   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | Skipping /home - not owner
	I0103 19:22:05.788392   30211 main.go:141] libmachine: (multinode-484895-m02) define libvirt domain using xml: 
	I0103 19:22:05.788424   30211 main.go:141] libmachine: (multinode-484895-m02) <domain type='kvm'>
	I0103 19:22:05.788432   30211 main.go:141] libmachine: (multinode-484895-m02)   <name>multinode-484895-m02</name>
	I0103 19:22:05.788440   30211 main.go:141] libmachine: (multinode-484895-m02)   <memory unit='MiB'>2200</memory>
	I0103 19:22:05.788450   30211 main.go:141] libmachine: (multinode-484895-m02)   <vcpu>2</vcpu>
	I0103 19:22:05.788463   30211 main.go:141] libmachine: (multinode-484895-m02)   <features>
	I0103 19:22:05.788475   30211 main.go:141] libmachine: (multinode-484895-m02)     <acpi/>
	I0103 19:22:05.788491   30211 main.go:141] libmachine: (multinode-484895-m02)     <apic/>
	I0103 19:22:05.788504   30211 main.go:141] libmachine: (multinode-484895-m02)     <pae/>
	I0103 19:22:05.788516   30211 main.go:141] libmachine: (multinode-484895-m02)     
	I0103 19:22:05.788526   30211 main.go:141] libmachine: (multinode-484895-m02)   </features>
	I0103 19:22:05.788538   30211 main.go:141] libmachine: (multinode-484895-m02)   <cpu mode='host-passthrough'>
	I0103 19:22:05.788551   30211 main.go:141] libmachine: (multinode-484895-m02)   
	I0103 19:22:05.788564   30211 main.go:141] libmachine: (multinode-484895-m02)   </cpu>
	I0103 19:22:05.788591   30211 main.go:141] libmachine: (multinode-484895-m02)   <os>
	I0103 19:22:05.788615   30211 main.go:141] libmachine: (multinode-484895-m02)     <type>hvm</type>
	I0103 19:22:05.788625   30211 main.go:141] libmachine: (multinode-484895-m02)     <boot dev='cdrom'/>
	I0103 19:22:05.788646   30211 main.go:141] libmachine: (multinode-484895-m02)     <boot dev='hd'/>
	I0103 19:22:05.788654   30211 main.go:141] libmachine: (multinode-484895-m02)     <bootmenu enable='no'/>
	I0103 19:22:05.788659   30211 main.go:141] libmachine: (multinode-484895-m02)   </os>
	I0103 19:22:05.788665   30211 main.go:141] libmachine: (multinode-484895-m02)   <devices>
	I0103 19:22:05.788671   30211 main.go:141] libmachine: (multinode-484895-m02)     <disk type='file' device='cdrom'>
	I0103 19:22:05.788692   30211 main.go:141] libmachine: (multinode-484895-m02)       <source file='/home/jenkins/minikube-integration/17885-9609/.minikube/machines/multinode-484895-m02/boot2docker.iso'/>
	I0103 19:22:05.788698   30211 main.go:141] libmachine: (multinode-484895-m02)       <target dev='hdc' bus='scsi'/>
	I0103 19:22:05.788704   30211 main.go:141] libmachine: (multinode-484895-m02)       <readonly/>
	I0103 19:22:05.788710   30211 main.go:141] libmachine: (multinode-484895-m02)     </disk>
	I0103 19:22:05.788717   30211 main.go:141] libmachine: (multinode-484895-m02)     <disk type='file' device='disk'>
	I0103 19:22:05.788723   30211 main.go:141] libmachine: (multinode-484895-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0103 19:22:05.788733   30211 main.go:141] libmachine: (multinode-484895-m02)       <source file='/home/jenkins/minikube-integration/17885-9609/.minikube/machines/multinode-484895-m02/multinode-484895-m02.rawdisk'/>
	I0103 19:22:05.788742   30211 main.go:141] libmachine: (multinode-484895-m02)       <target dev='hda' bus='virtio'/>
	I0103 19:22:05.788748   30211 main.go:141] libmachine: (multinode-484895-m02)     </disk>
	I0103 19:22:05.788762   30211 main.go:141] libmachine: (multinode-484895-m02)     <interface type='network'>
	I0103 19:22:05.788771   30211 main.go:141] libmachine: (multinode-484895-m02)       <source network='mk-multinode-484895'/>
	I0103 19:22:05.788777   30211 main.go:141] libmachine: (multinode-484895-m02)       <model type='virtio'/>
	I0103 19:22:05.788789   30211 main.go:141] libmachine: (multinode-484895-m02)     </interface>
	I0103 19:22:05.788800   30211 main.go:141] libmachine: (multinode-484895-m02)     <interface type='network'>
	I0103 19:22:05.788814   30211 main.go:141] libmachine: (multinode-484895-m02)       <source network='default'/>
	I0103 19:22:05.788826   30211 main.go:141] libmachine: (multinode-484895-m02)       <model type='virtio'/>
	I0103 19:22:05.788833   30211 main.go:141] libmachine: (multinode-484895-m02)     </interface>
	I0103 19:22:05.788839   30211 main.go:141] libmachine: (multinode-484895-m02)     <serial type='pty'>
	I0103 19:22:05.788846   30211 main.go:141] libmachine: (multinode-484895-m02)       <target port='0'/>
	I0103 19:22:05.788854   30211 main.go:141] libmachine: (multinode-484895-m02)     </serial>
	I0103 19:22:05.788860   30211 main.go:141] libmachine: (multinode-484895-m02)     <console type='pty'>
	I0103 19:22:05.788868   30211 main.go:141] libmachine: (multinode-484895-m02)       <target type='serial' port='0'/>
	I0103 19:22:05.788874   30211 main.go:141] libmachine: (multinode-484895-m02)     </console>
	I0103 19:22:05.788880   30211 main.go:141] libmachine: (multinode-484895-m02)     <rng model='virtio'>
	I0103 19:22:05.788891   30211 main.go:141] libmachine: (multinode-484895-m02)       <backend model='random'>/dev/random</backend>
	I0103 19:22:05.788908   30211 main.go:141] libmachine: (multinode-484895-m02)     </rng>
	I0103 19:22:05.788920   30211 main.go:141] libmachine: (multinode-484895-m02)     
	I0103 19:22:05.788933   30211 main.go:141] libmachine: (multinode-484895-m02)     
	I0103 19:22:05.788942   30211 main.go:141] libmachine: (multinode-484895-m02)   </devices>
	I0103 19:22:05.788947   30211 main.go:141] libmachine: (multinode-484895-m02) </domain>
	I0103 19:22:05.788960   30211 main.go:141] libmachine: (multinode-484895-m02) 
	I0103 19:22:05.796083   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | domain multinode-484895-m02 has defined MAC address 52:54:00:88:f1:3d in network default
	I0103 19:22:05.796758   30211 main.go:141] libmachine: (multinode-484895-m02) Ensuring networks are active...
	I0103 19:22:05.796778   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | domain multinode-484895-m02 has defined MAC address 52:54:00:b5:0c:0f in network mk-multinode-484895
	I0103 19:22:05.797661   30211 main.go:141] libmachine: (multinode-484895-m02) Ensuring network default is active
	I0103 19:22:05.798060   30211 main.go:141] libmachine: (multinode-484895-m02) Ensuring network mk-multinode-484895 is active
	I0103 19:22:05.798515   30211 main.go:141] libmachine: (multinode-484895-m02) Getting domain xml...
	I0103 19:22:05.799499   30211 main.go:141] libmachine: (multinode-484895-m02) Creating domain...
	I0103 19:22:07.064416   30211 main.go:141] libmachine: (multinode-484895-m02) Waiting to get IP...
	I0103 19:22:07.065188   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | domain multinode-484895-m02 has defined MAC address 52:54:00:b5:0c:0f in network mk-multinode-484895
	I0103 19:22:07.065644   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | unable to find current IP address of domain multinode-484895-m02 in network mk-multinode-484895
	I0103 19:22:07.065675   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | I0103 19:22:07.065600   30558 retry.go:31] will retry after 188.216455ms: waiting for machine to come up
	I0103 19:22:07.255043   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | domain multinode-484895-m02 has defined MAC address 52:54:00:b5:0c:0f in network mk-multinode-484895
	I0103 19:22:07.255435   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | unable to find current IP address of domain multinode-484895-m02 in network mk-multinode-484895
	I0103 19:22:07.255456   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | I0103 19:22:07.255403   30558 retry.go:31] will retry after 267.232133ms: waiting for machine to come up
	I0103 19:22:07.523852   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | domain multinode-484895-m02 has defined MAC address 52:54:00:b5:0c:0f in network mk-multinode-484895
	I0103 19:22:07.524315   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | unable to find current IP address of domain multinode-484895-m02 in network mk-multinode-484895
	I0103 19:22:07.524344   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | I0103 19:22:07.524265   30558 retry.go:31] will retry after 347.592492ms: waiting for machine to come up
	I0103 19:22:07.873942   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | domain multinode-484895-m02 has defined MAC address 52:54:00:b5:0c:0f in network mk-multinode-484895
	I0103 19:22:07.874414   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | unable to find current IP address of domain multinode-484895-m02 in network mk-multinode-484895
	I0103 19:22:07.874441   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | I0103 19:22:07.874353   30558 retry.go:31] will retry after 546.650184ms: waiting for machine to come up
	I0103 19:22:08.423121   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | domain multinode-484895-m02 has defined MAC address 52:54:00:b5:0c:0f in network mk-multinode-484895
	I0103 19:22:08.423535   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | unable to find current IP address of domain multinode-484895-m02 in network mk-multinode-484895
	I0103 19:22:08.423557   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | I0103 19:22:08.423497   30558 retry.go:31] will retry after 638.704496ms: waiting for machine to come up
	I0103 19:22:09.063280   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | domain multinode-484895-m02 has defined MAC address 52:54:00:b5:0c:0f in network mk-multinode-484895
	I0103 19:22:09.063722   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | unable to find current IP address of domain multinode-484895-m02 in network mk-multinode-484895
	I0103 19:22:09.063750   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | I0103 19:22:09.063680   30558 retry.go:31] will retry after 861.11711ms: waiting for machine to come up
	I0103 19:22:09.926798   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | domain multinode-484895-m02 has defined MAC address 52:54:00:b5:0c:0f in network mk-multinode-484895
	I0103 19:22:09.927176   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | unable to find current IP address of domain multinode-484895-m02 in network mk-multinode-484895
	I0103 19:22:09.927205   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | I0103 19:22:09.927104   30558 retry.go:31] will retry after 991.40661ms: waiting for machine to come up
	I0103 19:22:10.919510   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | domain multinode-484895-m02 has defined MAC address 52:54:00:b5:0c:0f in network mk-multinode-484895
	I0103 19:22:10.919981   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | unable to find current IP address of domain multinode-484895-m02 in network mk-multinode-484895
	I0103 19:22:10.920025   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | I0103 19:22:10.919927   30558 retry.go:31] will retry after 1.015416221s: waiting for machine to come up
	I0103 19:22:11.937009   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | domain multinode-484895-m02 has defined MAC address 52:54:00:b5:0c:0f in network mk-multinode-484895
	I0103 19:22:11.937444   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | unable to find current IP address of domain multinode-484895-m02 in network mk-multinode-484895
	I0103 19:22:11.937475   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | I0103 19:22:11.937383   30558 retry.go:31] will retry after 1.48236242s: waiting for machine to come up
	I0103 19:22:13.422113   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | domain multinode-484895-m02 has defined MAC address 52:54:00:b5:0c:0f in network mk-multinode-484895
	I0103 19:22:13.422622   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | unable to find current IP address of domain multinode-484895-m02 in network mk-multinode-484895
	I0103 19:22:13.422647   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | I0103 19:22:13.422572   30558 retry.go:31] will retry after 1.763295403s: waiting for machine to come up
	I0103 19:22:15.187502   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | domain multinode-484895-m02 has defined MAC address 52:54:00:b5:0c:0f in network mk-multinode-484895
	I0103 19:22:15.187989   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | unable to find current IP address of domain multinode-484895-m02 in network mk-multinode-484895
	I0103 19:22:15.188011   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | I0103 19:22:15.187966   30558 retry.go:31] will retry after 1.853745337s: waiting for machine to come up
	I0103 19:22:17.044307   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | domain multinode-484895-m02 has defined MAC address 52:54:00:b5:0c:0f in network mk-multinode-484895
	I0103 19:22:17.044816   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | unable to find current IP address of domain multinode-484895-m02 in network mk-multinode-484895
	I0103 19:22:17.044844   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | I0103 19:22:17.044764   30558 retry.go:31] will retry after 2.641452898s: waiting for machine to come up
	I0103 19:22:19.687794   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | domain multinode-484895-m02 has defined MAC address 52:54:00:b5:0c:0f in network mk-multinode-484895
	I0103 19:22:19.688250   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | unable to find current IP address of domain multinode-484895-m02 in network mk-multinode-484895
	I0103 19:22:19.688274   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | I0103 19:22:19.688203   30558 retry.go:31] will retry after 2.92739091s: waiting for machine to come up
	I0103 19:22:22.618877   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | domain multinode-484895-m02 has defined MAC address 52:54:00:b5:0c:0f in network mk-multinode-484895
	I0103 19:22:22.619329   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | unable to find current IP address of domain multinode-484895-m02 in network mk-multinode-484895
	I0103 19:22:22.619352   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | I0103 19:22:22.619280   30558 retry.go:31] will retry after 3.692835962s: waiting for machine to come up
	I0103 19:22:26.313604   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | domain multinode-484895-m02 has defined MAC address 52:54:00:b5:0c:0f in network mk-multinode-484895
	I0103 19:22:26.314038   30211 main.go:141] libmachine: (multinode-484895-m02) Found IP for machine: 192.168.39.86
	I0103 19:22:26.314060   30211 main.go:141] libmachine: (multinode-484895-m02) Reserving static IP address...
	I0103 19:22:26.314071   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | domain multinode-484895-m02 has current primary IP address 192.168.39.86 and MAC address 52:54:00:b5:0c:0f in network mk-multinode-484895
	I0103 19:22:26.314448   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | unable to find host DHCP lease matching {name: "multinode-484895-m02", mac: "52:54:00:b5:0c:0f", ip: "192.168.39.86"} in network mk-multinode-484895
	I0103 19:22:26.389849   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | Getting to WaitForSSH function...
	I0103 19:22:26.389876   30211 main.go:141] libmachine: (multinode-484895-m02) Reserved static IP address: 192.168.39.86
	I0103 19:22:26.389886   30211 main.go:141] libmachine: (multinode-484895-m02) Waiting for SSH to be available...
	I0103 19:22:26.392341   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | domain multinode-484895-m02 has defined MAC address 52:54:00:b5:0c:0f in network mk-multinode-484895
	I0103 19:22:26.392743   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0c:0f", ip: ""} in network mk-multinode-484895: {Iface:virbr1 ExpiryTime:2024-01-03 20:22:20 +0000 UTC Type:0 Mac:52:54:00:b5:0c:0f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b5:0c:0f}
	I0103 19:22:26.392778   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | domain multinode-484895-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:b5:0c:0f in network mk-multinode-484895
	I0103 19:22:26.392879   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | Using SSH client type: external
	I0103 19:22:26.392906   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/multinode-484895-m02/id_rsa (-rw-------)
	I0103 19:22:26.392938   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.86 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17885-9609/.minikube/machines/multinode-484895-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0103 19:22:26.392955   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | About to run SSH command:
	I0103 19:22:26.392969   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | exit 0
	I0103 19:22:26.482232   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | SSH cmd err, output: <nil>: 
	I0103 19:22:26.482541   30211 main.go:141] libmachine: (multinode-484895-m02) KVM machine creation complete!
	I0103 19:22:26.482797   30211 main.go:141] libmachine: (multinode-484895-m02) Calling .GetConfigRaw
	I0103 19:22:26.483416   30211 main.go:141] libmachine: (multinode-484895-m02) Calling .DriverName
	I0103 19:22:26.483620   30211 main.go:141] libmachine: (multinode-484895-m02) Calling .DriverName
	I0103 19:22:26.483747   30211 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0103 19:22:26.483764   30211 main.go:141] libmachine: (multinode-484895-m02) Calling .GetState
	I0103 19:22:26.484986   30211 main.go:141] libmachine: Detecting operating system of created instance...
	I0103 19:22:26.485000   30211 main.go:141] libmachine: Waiting for SSH to be available...
	I0103 19:22:26.485021   30211 main.go:141] libmachine: Getting to WaitForSSH function...
	I0103 19:22:26.485034   30211 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHHostname
	I0103 19:22:26.487417   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | domain multinode-484895-m02 has defined MAC address 52:54:00:b5:0c:0f in network mk-multinode-484895
	I0103 19:22:26.487834   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0c:0f", ip: ""} in network mk-multinode-484895: {Iface:virbr1 ExpiryTime:2024-01-03 20:22:20 +0000 UTC Type:0 Mac:52:54:00:b5:0c:0f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-484895-m02 Clientid:01:52:54:00:b5:0c:0f}
	I0103 19:22:26.487865   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | domain multinode-484895-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:b5:0c:0f in network mk-multinode-484895
	I0103 19:22:26.487956   30211 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHPort
	I0103 19:22:26.488115   30211 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHKeyPath
	I0103 19:22:26.488305   30211 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHKeyPath
	I0103 19:22:26.488480   30211 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHUsername
	I0103 19:22:26.488664   30211 main.go:141] libmachine: Using SSH client type: native
	I0103 19:22:26.489179   30211 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0103 19:22:26.489195   30211 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0103 19:22:26.601826   30211 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0103 19:22:26.601860   30211 main.go:141] libmachine: Detecting the provisioner...
	I0103 19:22:26.601872   30211 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHHostname
	I0103 19:22:26.604818   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | domain multinode-484895-m02 has defined MAC address 52:54:00:b5:0c:0f in network mk-multinode-484895
	I0103 19:22:26.605195   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0c:0f", ip: ""} in network mk-multinode-484895: {Iface:virbr1 ExpiryTime:2024-01-03 20:22:20 +0000 UTC Type:0 Mac:52:54:00:b5:0c:0f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-484895-m02 Clientid:01:52:54:00:b5:0c:0f}
	I0103 19:22:26.605219   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | domain multinode-484895-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:b5:0c:0f in network mk-multinode-484895
	I0103 19:22:26.605374   30211 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHPort
	I0103 19:22:26.605575   30211 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHKeyPath
	I0103 19:22:26.605756   30211 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHKeyPath
	I0103 19:22:26.605922   30211 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHUsername
	I0103 19:22:26.606081   30211 main.go:141] libmachine: Using SSH client type: native
	I0103 19:22:26.606399   30211 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0103 19:22:26.606411   30211 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0103 19:22:26.723157   30211 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gae27a7b-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0103 19:22:26.723210   30211 main.go:141] libmachine: found compatible host: buildroot
	I0103 19:22:26.723219   30211 main.go:141] libmachine: Provisioning with buildroot...
	I0103 19:22:26.723233   30211 main.go:141] libmachine: (multinode-484895-m02) Calling .GetMachineName
	I0103 19:22:26.723510   30211 buildroot.go:166] provisioning hostname "multinode-484895-m02"
	I0103 19:22:26.723572   30211 main.go:141] libmachine: (multinode-484895-m02) Calling .GetMachineName
	I0103 19:22:26.723771   30211 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHHostname
	I0103 19:22:26.726448   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | domain multinode-484895-m02 has defined MAC address 52:54:00:b5:0c:0f in network mk-multinode-484895
	I0103 19:22:26.726860   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0c:0f", ip: ""} in network mk-multinode-484895: {Iface:virbr1 ExpiryTime:2024-01-03 20:22:20 +0000 UTC Type:0 Mac:52:54:00:b5:0c:0f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-484895-m02 Clientid:01:52:54:00:b5:0c:0f}
	I0103 19:22:26.726896   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | domain multinode-484895-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:b5:0c:0f in network mk-multinode-484895
	I0103 19:22:26.727038   30211 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHPort
	I0103 19:22:26.727232   30211 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHKeyPath
	I0103 19:22:26.727431   30211 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHKeyPath
	I0103 19:22:26.727578   30211 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHUsername
	I0103 19:22:26.727803   30211 main.go:141] libmachine: Using SSH client type: native
	I0103 19:22:26.728104   30211 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0103 19:22:26.728119   30211 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-484895-m02 && echo "multinode-484895-m02" | sudo tee /etc/hostname
	I0103 19:22:26.854228   30211 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-484895-m02
	
	I0103 19:22:26.854261   30211 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHHostname
	I0103 19:22:26.856980   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | domain multinode-484895-m02 has defined MAC address 52:54:00:b5:0c:0f in network mk-multinode-484895
	I0103 19:22:26.857385   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0c:0f", ip: ""} in network mk-multinode-484895: {Iface:virbr1 ExpiryTime:2024-01-03 20:22:20 +0000 UTC Type:0 Mac:52:54:00:b5:0c:0f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-484895-m02 Clientid:01:52:54:00:b5:0c:0f}
	I0103 19:22:26.857424   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | domain multinode-484895-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:b5:0c:0f in network mk-multinode-484895
	I0103 19:22:26.857636   30211 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHPort
	I0103 19:22:26.857803   30211 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHKeyPath
	I0103 19:22:26.857923   30211 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHKeyPath
	I0103 19:22:26.858038   30211 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHUsername
	I0103 19:22:26.858243   30211 main.go:141] libmachine: Using SSH client type: native
	I0103 19:22:26.858668   30211 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0103 19:22:26.858687   30211 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-484895-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-484895-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-484895-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0103 19:22:26.978854   30211 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0103 19:22:26.978886   30211 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17885-9609/.minikube CaCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17885-9609/.minikube}
	I0103 19:22:26.978907   30211 buildroot.go:174] setting up certificates
	I0103 19:22:26.978918   30211 provision.go:83] configureAuth start
	I0103 19:22:26.978930   30211 main.go:141] libmachine: (multinode-484895-m02) Calling .GetMachineName
	I0103 19:22:26.979193   30211 main.go:141] libmachine: (multinode-484895-m02) Calling .GetIP
	I0103 19:22:26.981903   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | domain multinode-484895-m02 has defined MAC address 52:54:00:b5:0c:0f in network mk-multinode-484895
	I0103 19:22:26.982334   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0c:0f", ip: ""} in network mk-multinode-484895: {Iface:virbr1 ExpiryTime:2024-01-03 20:22:20 +0000 UTC Type:0 Mac:52:54:00:b5:0c:0f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-484895-m02 Clientid:01:52:54:00:b5:0c:0f}
	I0103 19:22:26.982357   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | domain multinode-484895-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:b5:0c:0f in network mk-multinode-484895
	I0103 19:22:26.982477   30211 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHHostname
	I0103 19:22:26.985261   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | domain multinode-484895-m02 has defined MAC address 52:54:00:b5:0c:0f in network mk-multinode-484895
	I0103 19:22:26.985681   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0c:0f", ip: ""} in network mk-multinode-484895: {Iface:virbr1 ExpiryTime:2024-01-03 20:22:20 +0000 UTC Type:0 Mac:52:54:00:b5:0c:0f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-484895-m02 Clientid:01:52:54:00:b5:0c:0f}
	I0103 19:22:26.985728   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | domain multinode-484895-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:b5:0c:0f in network mk-multinode-484895
	I0103 19:22:26.985841   30211 provision.go:138] copyHostCerts
	I0103 19:22:26.985871   30211 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem
	I0103 19:22:26.985925   30211 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem, removing ...
	I0103 19:22:26.985939   30211 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem
	I0103 19:22:26.986040   30211 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem (1078 bytes)
	I0103 19:22:26.986140   30211 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem
	I0103 19:22:26.986165   30211 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem, removing ...
	I0103 19:22:26.986174   30211 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem
	I0103 19:22:26.986212   30211 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem (1123 bytes)
	I0103 19:22:26.986351   30211 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem
	I0103 19:22:26.986387   30211 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem, removing ...
	I0103 19:22:26.986398   30211 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem
	I0103 19:22:26.986448   30211 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem (1679 bytes)
	I0103 19:22:26.986545   30211 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem org=jenkins.multinode-484895-m02 san=[192.168.39.86 192.168.39.86 localhost 127.0.0.1 minikube multinode-484895-m02]
	I0103 19:22:27.039552   30211 provision.go:172] copyRemoteCerts
	I0103 19:22:27.039612   30211 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0103 19:22:27.039639   30211 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHHostname
	I0103 19:22:27.042257   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | domain multinode-484895-m02 has defined MAC address 52:54:00:b5:0c:0f in network mk-multinode-484895
	I0103 19:22:27.042699   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0c:0f", ip: ""} in network mk-multinode-484895: {Iface:virbr1 ExpiryTime:2024-01-03 20:22:20 +0000 UTC Type:0 Mac:52:54:00:b5:0c:0f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-484895-m02 Clientid:01:52:54:00:b5:0c:0f}
	I0103 19:22:27.042731   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | domain multinode-484895-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:b5:0c:0f in network mk-multinode-484895
	I0103 19:22:27.042911   30211 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHPort
	I0103 19:22:27.043115   30211 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHKeyPath
	I0103 19:22:27.043285   30211 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHUsername
	I0103 19:22:27.043466   30211 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/multinode-484895-m02/id_rsa Username:docker}
	I0103 19:22:27.132112   30211 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0103 19:22:27.132182   30211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0103 19:22:27.159823   30211 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0103 19:22:27.159914   30211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0103 19:22:27.180643   30211 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0103 19:22:27.180717   30211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0103 19:22:27.201739   30211 provision.go:86] duration metric: configureAuth took 222.807914ms
	I0103 19:22:27.201763   30211 buildroot.go:189] setting minikube options for container-runtime
	I0103 19:22:27.201972   30211 config.go:182] Loaded profile config "multinode-484895": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 19:22:27.202067   30211 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHHostname
	I0103 19:22:27.204601   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | domain multinode-484895-m02 has defined MAC address 52:54:00:b5:0c:0f in network mk-multinode-484895
	I0103 19:22:27.204998   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0c:0f", ip: ""} in network mk-multinode-484895: {Iface:virbr1 ExpiryTime:2024-01-03 20:22:20 +0000 UTC Type:0 Mac:52:54:00:b5:0c:0f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-484895-m02 Clientid:01:52:54:00:b5:0c:0f}
	I0103 19:22:27.205027   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | domain multinode-484895-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:b5:0c:0f in network mk-multinode-484895
	I0103 19:22:27.205175   30211 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHPort
	I0103 19:22:27.205351   30211 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHKeyPath
	I0103 19:22:27.205534   30211 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHKeyPath
	I0103 19:22:27.205693   30211 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHUsername
	I0103 19:22:27.205884   30211 main.go:141] libmachine: Using SSH client type: native
	I0103 19:22:27.206194   30211 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0103 19:22:27.206210   30211 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0103 19:22:27.493341   30211 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0103 19:22:27.493369   30211 main.go:141] libmachine: Checking connection to Docker...
	I0103 19:22:27.493380   30211 main.go:141] libmachine: (multinode-484895-m02) Calling .GetURL
	I0103 19:22:27.494787   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | Using libvirt version 6000000
	I0103 19:22:27.496857   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | domain multinode-484895-m02 has defined MAC address 52:54:00:b5:0c:0f in network mk-multinode-484895
	I0103 19:22:27.497313   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0c:0f", ip: ""} in network mk-multinode-484895: {Iface:virbr1 ExpiryTime:2024-01-03 20:22:20 +0000 UTC Type:0 Mac:52:54:00:b5:0c:0f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-484895-m02 Clientid:01:52:54:00:b5:0c:0f}
	I0103 19:22:27.497346   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | domain multinode-484895-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:b5:0c:0f in network mk-multinode-484895
	I0103 19:22:27.497466   30211 main.go:141] libmachine: Docker is up and running!
	I0103 19:22:27.497494   30211 main.go:141] libmachine: Reticulating splines...
	I0103 19:22:27.497503   30211 client.go:171] LocalClient.Create took 22.219553958s
	I0103 19:22:27.497535   30211 start.go:167] duration metric: libmachine.API.Create for "multinode-484895" took 22.219630336s
	I0103 19:22:27.497550   30211 start.go:300] post-start starting for "multinode-484895-m02" (driver="kvm2")
	I0103 19:22:27.497562   30211 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0103 19:22:27.497588   30211 main.go:141] libmachine: (multinode-484895-m02) Calling .DriverName
	I0103 19:22:27.497840   30211 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0103 19:22:27.497874   30211 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHHostname
	I0103 19:22:27.500058   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | domain multinode-484895-m02 has defined MAC address 52:54:00:b5:0c:0f in network mk-multinode-484895
	I0103 19:22:27.500404   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0c:0f", ip: ""} in network mk-multinode-484895: {Iface:virbr1 ExpiryTime:2024-01-03 20:22:20 +0000 UTC Type:0 Mac:52:54:00:b5:0c:0f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-484895-m02 Clientid:01:52:54:00:b5:0c:0f}
	I0103 19:22:27.500433   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | domain multinode-484895-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:b5:0c:0f in network mk-multinode-484895
	I0103 19:22:27.500576   30211 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHPort
	I0103 19:22:27.500746   30211 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHKeyPath
	I0103 19:22:27.500894   30211 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHUsername
	I0103 19:22:27.501022   30211 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/multinode-484895-m02/id_rsa Username:docker}
	I0103 19:22:27.587898   30211 ssh_runner.go:195] Run: cat /etc/os-release
	I0103 19:22:27.591812   30211 command_runner.go:130] > NAME=Buildroot
	I0103 19:22:27.591839   30211 command_runner.go:130] > VERSION=2021.02.12-1-gae27a7b-dirty
	I0103 19:22:27.591843   30211 command_runner.go:130] > ID=buildroot
	I0103 19:22:27.591849   30211 command_runner.go:130] > VERSION_ID=2021.02.12
	I0103 19:22:27.591854   30211 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0103 19:22:27.592002   30211 info.go:137] Remote host: Buildroot 2021.02.12
	I0103 19:22:27.592020   30211 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-9609/.minikube/addons for local assets ...
	I0103 19:22:27.592076   30211 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-9609/.minikube/files for local assets ...
	I0103 19:22:27.592139   30211 filesync.go:149] local asset: /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem -> 167952.pem in /etc/ssl/certs
	I0103 19:22:27.592148   30211 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem -> /etc/ssl/certs/167952.pem
	I0103 19:22:27.592236   30211 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0103 19:22:27.600447   30211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem --> /etc/ssl/certs/167952.pem (1708 bytes)
	I0103 19:22:27.623073   30211 start.go:303] post-start completed in 125.510967ms
	I0103 19:22:27.623123   30211 main.go:141] libmachine: (multinode-484895-m02) Calling .GetConfigRaw
	I0103 19:22:27.623692   30211 main.go:141] libmachine: (multinode-484895-m02) Calling .GetIP
	I0103 19:22:27.626199   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | domain multinode-484895-m02 has defined MAC address 52:54:00:b5:0c:0f in network mk-multinode-484895
	I0103 19:22:27.626539   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0c:0f", ip: ""} in network mk-multinode-484895: {Iface:virbr1 ExpiryTime:2024-01-03 20:22:20 +0000 UTC Type:0 Mac:52:54:00:b5:0c:0f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-484895-m02 Clientid:01:52:54:00:b5:0c:0f}
	I0103 19:22:27.626570   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | domain multinode-484895-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:b5:0c:0f in network mk-multinode-484895
	I0103 19:22:27.626787   30211 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/multinode-484895/config.json ...
	I0103 19:22:27.627015   30211 start.go:128] duration metric: createHost completed in 22.367859468s
	I0103 19:22:27.627043   30211 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHHostname
	I0103 19:22:27.629290   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | domain multinode-484895-m02 has defined MAC address 52:54:00:b5:0c:0f in network mk-multinode-484895
	I0103 19:22:27.629654   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0c:0f", ip: ""} in network mk-multinode-484895: {Iface:virbr1 ExpiryTime:2024-01-03 20:22:20 +0000 UTC Type:0 Mac:52:54:00:b5:0c:0f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-484895-m02 Clientid:01:52:54:00:b5:0c:0f}
	I0103 19:22:27.629675   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | domain multinode-484895-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:b5:0c:0f in network mk-multinode-484895
	I0103 19:22:27.629847   30211 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHPort
	I0103 19:22:27.630013   30211 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHKeyPath
	I0103 19:22:27.630156   30211 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHKeyPath
	I0103 19:22:27.630295   30211 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHUsername
	I0103 19:22:27.630457   30211 main.go:141] libmachine: Using SSH client type: native
	I0103 19:22:27.630799   30211 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0103 19:22:27.630811   30211 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0103 19:22:27.743412   30211 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704309747.713279451
	
	I0103 19:22:27.743436   30211 fix.go:206] guest clock: 1704309747.713279451
	I0103 19:22:27.743443   30211 fix.go:219] Guest: 2024-01-03 19:22:27.713279451 +0000 UTC Remote: 2024-01-03 19:22:27.627029687 +0000 UTC m=+87.439529219 (delta=86.249764ms)
	I0103 19:22:27.743457   30211 fix.go:190] guest clock delta is within tolerance: 86.249764ms
	I0103 19:22:27.743462   30211 start.go:83] releasing machines lock for "multinode-484895-m02", held for 22.48439479s
	I0103 19:22:27.743478   30211 main.go:141] libmachine: (multinode-484895-m02) Calling .DriverName
	I0103 19:22:27.743785   30211 main.go:141] libmachine: (multinode-484895-m02) Calling .GetIP
	I0103 19:22:27.746727   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | domain multinode-484895-m02 has defined MAC address 52:54:00:b5:0c:0f in network mk-multinode-484895
	I0103 19:22:27.747041   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0c:0f", ip: ""} in network mk-multinode-484895: {Iface:virbr1 ExpiryTime:2024-01-03 20:22:20 +0000 UTC Type:0 Mac:52:54:00:b5:0c:0f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-484895-m02 Clientid:01:52:54:00:b5:0c:0f}
	I0103 19:22:27.747066   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | domain multinode-484895-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:b5:0c:0f in network mk-multinode-484895
	I0103 19:22:27.749617   30211 out.go:177] * Found network options:
	I0103 19:22:27.751071   30211 out.go:177]   - NO_PROXY=192.168.39.191
	W0103 19:22:27.752390   30211 proxy.go:119] fail to check proxy env: Error ip not in block
	I0103 19:22:27.752431   30211 main.go:141] libmachine: (multinode-484895-m02) Calling .DriverName
	I0103 19:22:27.753007   30211 main.go:141] libmachine: (multinode-484895-m02) Calling .DriverName
	I0103 19:22:27.753186   30211 main.go:141] libmachine: (multinode-484895-m02) Calling .DriverName
	I0103 19:22:27.753277   30211 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0103 19:22:27.753308   30211 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHHostname
	W0103 19:22:27.753393   30211 proxy.go:119] fail to check proxy env: Error ip not in block
	I0103 19:22:27.753459   30211 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0103 19:22:27.753494   30211 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHHostname
	I0103 19:22:27.755932   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | domain multinode-484895-m02 has defined MAC address 52:54:00:b5:0c:0f in network mk-multinode-484895
	I0103 19:22:27.756198   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | domain multinode-484895-m02 has defined MAC address 52:54:00:b5:0c:0f in network mk-multinode-484895
	I0103 19:22:27.756267   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0c:0f", ip: ""} in network mk-multinode-484895: {Iface:virbr1 ExpiryTime:2024-01-03 20:22:20 +0000 UTC Type:0 Mac:52:54:00:b5:0c:0f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-484895-m02 Clientid:01:52:54:00:b5:0c:0f}
	I0103 19:22:27.756293   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | domain multinode-484895-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:b5:0c:0f in network mk-multinode-484895
	I0103 19:22:27.756473   30211 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHPort
	I0103 19:22:27.756590   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0c:0f", ip: ""} in network mk-multinode-484895: {Iface:virbr1 ExpiryTime:2024-01-03 20:22:20 +0000 UTC Type:0 Mac:52:54:00:b5:0c:0f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-484895-m02 Clientid:01:52:54:00:b5:0c:0f}
	I0103 19:22:27.756627   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | domain multinode-484895-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:b5:0c:0f in network mk-multinode-484895
	I0103 19:22:27.756651   30211 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHKeyPath
	I0103 19:22:27.756751   30211 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHPort
	I0103 19:22:27.756841   30211 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHUsername
	I0103 19:22:27.756913   30211 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHKeyPath
	I0103 19:22:27.757029   30211 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHUsername
	I0103 19:22:27.757027   30211 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/multinode-484895-m02/id_rsa Username:docker}
	I0103 19:22:27.757164   30211 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/multinode-484895-m02/id_rsa Username:docker}
	I0103 19:22:27.992715   30211 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0103 19:22:27.992718   30211 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0103 19:22:27.999138   30211 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0103 19:22:27.999502   30211 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0103 19:22:27.999561   30211 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0103 19:22:28.014875   30211 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0103 19:22:28.014933   30211 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0103 19:22:28.014941   30211 start.go:475] detecting cgroup driver to use...
	I0103 19:22:28.015006   30211 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0103 19:22:28.030564   30211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0103 19:22:28.042750   30211 docker.go:203] disabling cri-docker service (if available) ...
	I0103 19:22:28.042802   30211 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0103 19:22:28.054768   30211 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0103 19:22:28.066592   30211 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0103 19:22:28.180623   30211 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I0103 19:22:28.180701   30211 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0103 19:22:28.308760   30211 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0103 19:22:28.308799   30211 docker.go:219] disabling docker service ...
	I0103 19:22:28.308844   30211 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0103 19:22:28.322159   30211 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0103 19:22:28.333836   30211 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I0103 19:22:28.333928   30211 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0103 19:22:28.461871   30211 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0103 19:22:28.461963   30211 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0103 19:22:28.574232   30211 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I0103 19:22:28.574264   30211 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0103 19:22:28.574325   30211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0103 19:22:28.587595   30211 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0103 19:22:28.606093   30211 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0103 19:22:28.606150   30211 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0103 19:22:28.606202   30211 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 19:22:28.617298   30211 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0103 19:22:28.617375   30211 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 19:22:28.628539   30211 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 19:22:28.638579   30211 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 19:22:28.649458   30211 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0103 19:22:28.661123   30211 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0103 19:22:28.671288   30211 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0103 19:22:28.671343   30211 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0103 19:22:28.671404   30211 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0103 19:22:28.686603   30211 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0103 19:22:28.696710   30211 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0103 19:22:28.806710   30211 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0103 19:22:28.961224   30211 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0103 19:22:28.961291   30211 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0103 19:22:28.965718   30211 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0103 19:22:28.965745   30211 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0103 19:22:28.965756   30211 command_runner.go:130] > Device: 16h/22d	Inode: 744         Links: 1
	I0103 19:22:28.965766   30211 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0103 19:22:28.965783   30211 command_runner.go:130] > Access: 2024-01-03 19:22:28.916797972 +0000
	I0103 19:22:28.965792   30211 command_runner.go:130] > Modify: 2024-01-03 19:22:28.916797972 +0000
	I0103 19:22:28.965797   30211 command_runner.go:130] > Change: 2024-01-03 19:22:28.916797972 +0000
	I0103 19:22:28.965801   30211 command_runner.go:130] >  Birth: -
	I0103 19:22:28.965902   30211 start.go:543] Will wait 60s for crictl version
	I0103 19:22:28.965959   30211 ssh_runner.go:195] Run: which crictl
	I0103 19:22:28.969965   30211 command_runner.go:130] > /usr/bin/crictl
	I0103 19:22:28.970037   30211 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0103 19:22:29.004428   30211 command_runner.go:130] > Version:  0.1.0
	I0103 19:22:29.004453   30211 command_runner.go:130] > RuntimeName:  cri-o
	I0103 19:22:29.004458   30211 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0103 19:22:29.004463   30211 command_runner.go:130] > RuntimeApiVersion:  v1
	I0103 19:22:29.004483   30211 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0103 19:22:29.004536   30211 ssh_runner.go:195] Run: crio --version
	I0103 19:22:29.054561   30211 command_runner.go:130] > crio version 1.24.1
	I0103 19:22:29.054582   30211 command_runner.go:130] > Version:          1.24.1
	I0103 19:22:29.054589   30211 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0103 19:22:29.054593   30211 command_runner.go:130] > GitTreeState:     dirty
	I0103 19:22:29.054599   30211 command_runner.go:130] > BuildDate:        2023-12-16T11:46:37Z
	I0103 19:22:29.054608   30211 command_runner.go:130] > GoVersion:        go1.19.9
	I0103 19:22:29.054612   30211 command_runner.go:130] > Compiler:         gc
	I0103 19:22:29.054617   30211 command_runner.go:130] > Platform:         linux/amd64
	I0103 19:22:29.054623   30211 command_runner.go:130] > Linkmode:         dynamic
	I0103 19:22:29.054629   30211 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0103 19:22:29.054634   30211 command_runner.go:130] > SeccompEnabled:   true
	I0103 19:22:29.054638   30211 command_runner.go:130] > AppArmorEnabled:  false
	I0103 19:22:29.055964   30211 ssh_runner.go:195] Run: crio --version
	I0103 19:22:29.104072   30211 command_runner.go:130] > crio version 1.24.1
	I0103 19:22:29.104102   30211 command_runner.go:130] > Version:          1.24.1
	I0103 19:22:29.104113   30211 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0103 19:22:29.104120   30211 command_runner.go:130] > GitTreeState:     dirty
	I0103 19:22:29.104128   30211 command_runner.go:130] > BuildDate:        2023-12-16T11:46:37Z
	I0103 19:22:29.104136   30211 command_runner.go:130] > GoVersion:        go1.19.9
	I0103 19:22:29.104143   30211 command_runner.go:130] > Compiler:         gc
	I0103 19:22:29.104151   30211 command_runner.go:130] > Platform:         linux/amd64
	I0103 19:22:29.104160   30211 command_runner.go:130] > Linkmode:         dynamic
	I0103 19:22:29.104172   30211 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0103 19:22:29.104181   30211 command_runner.go:130] > SeccompEnabled:   true
	I0103 19:22:29.104188   30211 command_runner.go:130] > AppArmorEnabled:  false
	I0103 19:22:29.107298   30211 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0103 19:22:29.108760   30211 out.go:177]   - env NO_PROXY=192.168.39.191
	I0103 19:22:29.110246   30211 main.go:141] libmachine: (multinode-484895-m02) Calling .GetIP
	I0103 19:22:29.113000   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | domain multinode-484895-m02 has defined MAC address 52:54:00:b5:0c:0f in network mk-multinode-484895
	I0103 19:22:29.113373   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0c:0f", ip: ""} in network mk-multinode-484895: {Iface:virbr1 ExpiryTime:2024-01-03 20:22:20 +0000 UTC Type:0 Mac:52:54:00:b5:0c:0f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-484895-m02 Clientid:01:52:54:00:b5:0c:0f}
	I0103 19:22:29.113403   30211 main.go:141] libmachine: (multinode-484895-m02) DBG | domain multinode-484895-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:b5:0c:0f in network mk-multinode-484895
	I0103 19:22:29.113646   30211 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0103 19:22:29.117584   30211 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0103 19:22:29.128654   30211 certs.go:56] Setting up /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/multinode-484895 for IP: 192.168.39.86
	I0103 19:22:29.128693   30211 certs.go:190] acquiring lock for shared ca certs: {Name:mkcbd6a6a2f3ee7625ecf4a1f72bb7f9689bd33d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 19:22:29.128833   30211 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17885-9609/.minikube/ca.key
	I0103 19:22:29.128871   30211 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.key
	I0103 19:22:29.128884   30211 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0103 19:22:29.128896   30211 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0103 19:22:29.128906   30211 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0103 19:22:29.128918   30211 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0103 19:22:29.128965   30211 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795.pem (1338 bytes)
	W0103 19:22:29.128994   30211 certs.go:433] ignoring /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795_empty.pem, impossibly tiny 0 bytes
	I0103 19:22:29.129013   30211 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem (1675 bytes)
	I0103 19:22:29.129038   30211 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem (1078 bytes)
	I0103 19:22:29.129073   30211 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem (1123 bytes)
	I0103 19:22:29.129096   30211 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem (1679 bytes)
	I0103 19:22:29.129133   30211 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem (1708 bytes)
	I0103 19:22:29.129157   30211 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem -> /usr/share/ca-certificates/167952.pem
	I0103 19:22:29.129169   30211 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0103 19:22:29.129182   30211 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795.pem -> /usr/share/ca-certificates/16795.pem
	I0103 19:22:29.129519   30211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0103 19:22:29.150909   30211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0103 19:22:29.173232   30211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0103 19:22:29.194913   30211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0103 19:22:29.217140   30211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem --> /usr/share/ca-certificates/167952.pem (1708 bytes)
	I0103 19:22:29.238172   30211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0103 19:22:29.259066   30211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795.pem --> /usr/share/ca-certificates/16795.pem (1338 bytes)
	I0103 19:22:29.280706   30211 ssh_runner.go:195] Run: openssl version
	I0103 19:22:29.285672   30211 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0103 19:22:29.285887   30211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0103 19:22:29.295704   30211 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0103 19:22:29.299925   30211 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan  3 18:58 /usr/share/ca-certificates/minikubeCA.pem
	I0103 19:22:29.300034   30211 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  3 18:58 /usr/share/ca-certificates/minikubeCA.pem
	I0103 19:22:29.300092   30211 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0103 19:22:29.304956   30211 command_runner.go:130] > b5213941
	I0103 19:22:29.305030   30211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0103 19:22:29.314776   30211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16795.pem && ln -fs /usr/share/ca-certificates/16795.pem /etc/ssl/certs/16795.pem"
	I0103 19:22:29.324399   30211 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16795.pem
	I0103 19:22:29.328796   30211 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan  3 19:07 /usr/share/ca-certificates/16795.pem
	I0103 19:22:29.328829   30211 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  3 19:07 /usr/share/ca-certificates/16795.pem
	I0103 19:22:29.328871   30211 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16795.pem
	I0103 19:22:29.333827   30211 command_runner.go:130] > 51391683
	I0103 19:22:29.334075   30211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16795.pem /etc/ssl/certs/51391683.0"
	I0103 19:22:29.344180   30211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167952.pem && ln -fs /usr/share/ca-certificates/167952.pem /etc/ssl/certs/167952.pem"
	I0103 19:22:29.354354   30211 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167952.pem
	I0103 19:22:29.358472   30211 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan  3 19:07 /usr/share/ca-certificates/167952.pem
	I0103 19:22:29.358724   30211 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  3 19:07 /usr/share/ca-certificates/167952.pem
	I0103 19:22:29.358786   30211 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167952.pem
	I0103 19:22:29.364385   30211 command_runner.go:130] > 3ec20f2e
	I0103 19:22:29.364621   30211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167952.pem /etc/ssl/certs/3ec20f2e.0"
	I0103 19:22:29.376326   30211 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0103 19:22:29.380226   30211 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0103 19:22:29.380349   30211 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0103 19:22:29.380449   30211 ssh_runner.go:195] Run: crio config
	I0103 19:22:29.430194   30211 command_runner.go:130] ! time="2024-01-03 19:22:29.403147573Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0103 19:22:29.430290   30211 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0103 19:22:29.441670   30211 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0103 19:22:29.441696   30211 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0103 19:22:29.441703   30211 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0103 19:22:29.441708   30211 command_runner.go:130] > #
	I0103 19:22:29.441714   30211 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0103 19:22:29.441721   30211 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0103 19:22:29.441730   30211 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0103 19:22:29.441738   30211 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0103 19:22:29.441744   30211 command_runner.go:130] > # reload'.
	I0103 19:22:29.441750   30211 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0103 19:22:29.441759   30211 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0103 19:22:29.441765   30211 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0103 19:22:29.441773   30211 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0103 19:22:29.441777   30211 command_runner.go:130] > [crio]
	I0103 19:22:29.441783   30211 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0103 19:22:29.441790   30211 command_runner.go:130] > # containers images, in this directory.
	I0103 19:22:29.441795   30211 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0103 19:22:29.441804   30211 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0103 19:22:29.441811   30211 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0103 19:22:29.441818   30211 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0103 19:22:29.441827   30211 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0103 19:22:29.441831   30211 command_runner.go:130] > storage_driver = "overlay"
	I0103 19:22:29.441837   30211 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0103 19:22:29.441843   30211 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0103 19:22:29.441847   30211 command_runner.go:130] > storage_option = [
	I0103 19:22:29.441858   30211 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0103 19:22:29.441866   30211 command_runner.go:130] > ]
	I0103 19:22:29.441877   30211 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0103 19:22:29.441891   30211 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0103 19:22:29.441899   30211 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0103 19:22:29.441905   30211 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0103 19:22:29.441913   30211 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0103 19:22:29.441917   30211 command_runner.go:130] > # always happen on a node reboot
	I0103 19:22:29.441923   30211 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0103 19:22:29.441928   30211 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0103 19:22:29.441937   30211 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0103 19:22:29.441944   30211 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0103 19:22:29.441952   30211 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0103 19:22:29.441959   30211 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0103 19:22:29.441969   30211 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0103 19:22:29.441975   30211 command_runner.go:130] > # internal_wipe = true
	I0103 19:22:29.441981   30211 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0103 19:22:29.441989   30211 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0103 19:22:29.441995   30211 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0103 19:22:29.442002   30211 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0103 19:22:29.442008   30211 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0103 19:22:29.442014   30211 command_runner.go:130] > [crio.api]
	I0103 19:22:29.442020   30211 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0103 19:22:29.442026   30211 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0103 19:22:29.442032   30211 command_runner.go:130] > # IP address on which the stream server will listen.
	I0103 19:22:29.442038   30211 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0103 19:22:29.442045   30211 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0103 19:22:29.442052   30211 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0103 19:22:29.442059   30211 command_runner.go:130] > # stream_port = "0"
	I0103 19:22:29.442064   30211 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0103 19:22:29.442071   30211 command_runner.go:130] > # stream_enable_tls = false
	I0103 19:22:29.442077   30211 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0103 19:22:29.442083   30211 command_runner.go:130] > # stream_idle_timeout = ""
	I0103 19:22:29.442090   30211 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0103 19:22:29.442098   30211 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0103 19:22:29.442104   30211 command_runner.go:130] > # minutes.
	I0103 19:22:29.442108   30211 command_runner.go:130] > # stream_tls_cert = ""
	I0103 19:22:29.442116   30211 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0103 19:22:29.442124   30211 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0103 19:22:29.442135   30211 command_runner.go:130] > # stream_tls_key = ""
	I0103 19:22:29.442141   30211 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0103 19:22:29.442149   30211 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0103 19:22:29.442156   30211 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0103 19:22:29.442161   30211 command_runner.go:130] > # stream_tls_ca = ""
	I0103 19:22:29.442170   30211 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0103 19:22:29.442175   30211 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0103 19:22:29.442186   30211 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0103 19:22:29.442192   30211 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0103 19:22:29.442224   30211 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0103 19:22:29.442235   30211 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0103 19:22:29.442239   30211 command_runner.go:130] > [crio.runtime]
	I0103 19:22:29.442245   30211 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0103 19:22:29.442253   30211 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0103 19:22:29.442257   30211 command_runner.go:130] > # "nofile=1024:2048"
	I0103 19:22:29.442264   30211 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0103 19:22:29.442270   30211 command_runner.go:130] > # default_ulimits = [
	I0103 19:22:29.442274   30211 command_runner.go:130] > # ]
	I0103 19:22:29.442282   30211 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0103 19:22:29.442288   30211 command_runner.go:130] > # no_pivot = false
	I0103 19:22:29.442294   30211 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0103 19:22:29.442302   30211 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0103 19:22:29.442309   30211 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0103 19:22:29.442316   30211 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0103 19:22:29.442323   30211 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0103 19:22:29.442329   30211 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0103 19:22:29.442336   30211 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0103 19:22:29.442340   30211 command_runner.go:130] > # Cgroup setting for conmon
	I0103 19:22:29.442347   30211 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0103 19:22:29.442353   30211 command_runner.go:130] > conmon_cgroup = "pod"
	I0103 19:22:29.442359   30211 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0103 19:22:29.442366   30211 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0103 19:22:29.442375   30211 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0103 19:22:29.442381   30211 command_runner.go:130] > conmon_env = [
	I0103 19:22:29.442387   30211 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0103 19:22:29.442393   30211 command_runner.go:130] > ]
	I0103 19:22:29.442398   30211 command_runner.go:130] > # Additional environment variables to set for all the
	I0103 19:22:29.442406   30211 command_runner.go:130] > # containers. These are overridden if set in the
	I0103 19:22:29.442414   30211 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0103 19:22:29.442418   30211 command_runner.go:130] > # default_env = [
	I0103 19:22:29.442423   30211 command_runner.go:130] > # ]
	I0103 19:22:29.442429   30211 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0103 19:22:29.442435   30211 command_runner.go:130] > # selinux = false
	I0103 19:22:29.442441   30211 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0103 19:22:29.442449   30211 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0103 19:22:29.442457   30211 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0103 19:22:29.442462   30211 command_runner.go:130] > # seccomp_profile = ""
	I0103 19:22:29.442469   30211 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0103 19:22:29.442477   30211 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0103 19:22:29.442484   30211 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0103 19:22:29.442491   30211 command_runner.go:130] > # which might increase security.
	I0103 19:22:29.442496   30211 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0103 19:22:29.442504   30211 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0103 19:22:29.442512   30211 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0103 19:22:29.442531   30211 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0103 19:22:29.442538   30211 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0103 19:22:29.442543   30211 command_runner.go:130] > # This option supports live configuration reload.
	I0103 19:22:29.442550   30211 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0103 19:22:29.442556   30211 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0103 19:22:29.442562   30211 command_runner.go:130] > # the cgroup blockio controller.
	I0103 19:22:29.442567   30211 command_runner.go:130] > # blockio_config_file = ""
	I0103 19:22:29.442575   30211 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0103 19:22:29.442581   30211 command_runner.go:130] > # irqbalance daemon.
	I0103 19:22:29.442587   30211 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0103 19:22:29.442595   30211 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0103 19:22:29.442603   30211 command_runner.go:130] > # This option supports live configuration reload.
	I0103 19:22:29.442609   30211 command_runner.go:130] > # rdt_config_file = ""
	I0103 19:22:29.442615   30211 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0103 19:22:29.442621   30211 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0103 19:22:29.442627   30211 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0103 19:22:29.442632   30211 command_runner.go:130] > # separate_pull_cgroup = ""
	I0103 19:22:29.442642   30211 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0103 19:22:29.442650   30211 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0103 19:22:29.442657   30211 command_runner.go:130] > # will be added.
	I0103 19:22:29.442661   30211 command_runner.go:130] > # default_capabilities = [
	I0103 19:22:29.442667   30211 command_runner.go:130] > # 	"CHOWN",
	I0103 19:22:29.442672   30211 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0103 19:22:29.442677   30211 command_runner.go:130] > # 	"FSETID",
	I0103 19:22:29.442681   30211 command_runner.go:130] > # 	"FOWNER",
	I0103 19:22:29.442688   30211 command_runner.go:130] > # 	"SETGID",
	I0103 19:22:29.442692   30211 command_runner.go:130] > # 	"SETUID",
	I0103 19:22:29.442698   30211 command_runner.go:130] > # 	"SETPCAP",
	I0103 19:22:29.442702   30211 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0103 19:22:29.442710   30211 command_runner.go:130] > # 	"KILL",
	I0103 19:22:29.442714   30211 command_runner.go:130] > # ]
	I0103 19:22:29.442722   30211 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0103 19:22:29.442730   30211 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0103 19:22:29.442737   30211 command_runner.go:130] > # default_sysctls = [
	I0103 19:22:29.442743   30211 command_runner.go:130] > # ]
	I0103 19:22:29.442748   30211 command_runner.go:130] > # List of devices on the host that a
	I0103 19:22:29.442756   30211 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0103 19:22:29.442762   30211 command_runner.go:130] > # allowed_devices = [
	I0103 19:22:29.442766   30211 command_runner.go:130] > # 	"/dev/fuse",
	I0103 19:22:29.442772   30211 command_runner.go:130] > # ]
	I0103 19:22:29.442777   30211 command_runner.go:130] > # List of additional devices. specified as
	I0103 19:22:29.442786   30211 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0103 19:22:29.442794   30211 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0103 19:22:29.442807   30211 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0103 19:22:29.442814   30211 command_runner.go:130] > # additional_devices = [
	I0103 19:22:29.442817   30211 command_runner.go:130] > # ]
	I0103 19:22:29.442825   30211 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0103 19:22:29.442830   30211 command_runner.go:130] > # cdi_spec_dirs = [
	I0103 19:22:29.442836   30211 command_runner.go:130] > # 	"/etc/cdi",
	I0103 19:22:29.442843   30211 command_runner.go:130] > # 	"/var/run/cdi",
	I0103 19:22:29.442846   30211 command_runner.go:130] > # ]
	I0103 19:22:29.442855   30211 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0103 19:22:29.442862   30211 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0103 19:22:29.442869   30211 command_runner.go:130] > # Defaults to false.
	I0103 19:22:29.442873   30211 command_runner.go:130] > # device_ownership_from_security_context = false
	I0103 19:22:29.442882   30211 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0103 19:22:29.442888   30211 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0103 19:22:29.442894   30211 command_runner.go:130] > # hooks_dir = [
	I0103 19:22:29.442899   30211 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0103 19:22:29.442905   30211 command_runner.go:130] > # ]
	I0103 19:22:29.442911   30211 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0103 19:22:29.442920   30211 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0103 19:22:29.442925   30211 command_runner.go:130] > # its default mounts from the following two files:
	I0103 19:22:29.442930   30211 command_runner.go:130] > #
	I0103 19:22:29.442936   30211 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0103 19:22:29.442945   30211 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0103 19:22:29.442953   30211 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0103 19:22:29.442959   30211 command_runner.go:130] > #
	I0103 19:22:29.442966   30211 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0103 19:22:29.442974   30211 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0103 19:22:29.442981   30211 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0103 19:22:29.442987   30211 command_runner.go:130] > #      only add mounts it finds in this file.
	I0103 19:22:29.442991   30211 command_runner.go:130] > #
	I0103 19:22:29.442996   30211 command_runner.go:130] > # default_mounts_file = ""
	I0103 19:22:29.443002   30211 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0103 19:22:29.443010   30211 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0103 19:22:29.443016   30211 command_runner.go:130] > pids_limit = 1024
	I0103 19:22:29.443023   30211 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0103 19:22:29.443031   30211 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0103 19:22:29.443039   30211 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0103 19:22:29.443049   30211 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0103 19:22:29.443055   30211 command_runner.go:130] > # log_size_max = -1
	I0103 19:22:29.443062   30211 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kuberentes log file
	I0103 19:22:29.443068   30211 command_runner.go:130] > # log_to_journald = false
	I0103 19:22:29.443074   30211 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0103 19:22:29.443081   30211 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0103 19:22:29.443087   30211 command_runner.go:130] > # Path to directory for container attach sockets.
	I0103 19:22:29.443093   30211 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0103 19:22:29.443100   30211 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0103 19:22:29.443107   30211 command_runner.go:130] > # bind_mount_prefix = ""
	I0103 19:22:29.443112   30211 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0103 19:22:29.443118   30211 command_runner.go:130] > # read_only = false
	I0103 19:22:29.443124   30211 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0103 19:22:29.443132   30211 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0103 19:22:29.443137   30211 command_runner.go:130] > # live configuration reload.
	I0103 19:22:29.443143   30211 command_runner.go:130] > # log_level = "info"
	I0103 19:22:29.443148   30211 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0103 19:22:29.443154   30211 command_runner.go:130] > # This option supports live configuration reload.
	I0103 19:22:29.443160   30211 command_runner.go:130] > # log_filter = ""
	I0103 19:22:29.443166   30211 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0103 19:22:29.443174   30211 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0103 19:22:29.443181   30211 command_runner.go:130] > # separated by comma.
	I0103 19:22:29.443185   30211 command_runner.go:130] > # uid_mappings = ""
	I0103 19:22:29.443193   30211 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0103 19:22:29.443200   30211 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0103 19:22:29.443207   30211 command_runner.go:130] > # separated by comma.
	I0103 19:22:29.443211   30211 command_runner.go:130] > # gid_mappings = ""
	I0103 19:22:29.443218   30211 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0103 19:22:29.443226   30211 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0103 19:22:29.443234   30211 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0103 19:22:29.443240   30211 command_runner.go:130] > # minimum_mappable_uid = -1
	I0103 19:22:29.443246   30211 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0103 19:22:29.443254   30211 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0103 19:22:29.443262   30211 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0103 19:22:29.443268   30211 command_runner.go:130] > # minimum_mappable_gid = -1
	I0103 19:22:29.443274   30211 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0103 19:22:29.443282   30211 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0103 19:22:29.443290   30211 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0103 19:22:29.443296   30211 command_runner.go:130] > # ctr_stop_timeout = 30
	I0103 19:22:29.443301   30211 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0103 19:22:29.443309   30211 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0103 19:22:29.443314   30211 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0103 19:22:29.443321   30211 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0103 19:22:29.443326   30211 command_runner.go:130] > drop_infra_ctr = false
	I0103 19:22:29.443334   30211 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0103 19:22:29.443341   30211 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0103 19:22:29.443349   30211 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0103 19:22:29.443355   30211 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0103 19:22:29.443361   30211 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0103 19:22:29.443368   30211 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0103 19:22:29.443372   30211 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0103 19:22:29.443382   30211 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0103 19:22:29.443388   30211 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0103 19:22:29.443394   30211 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0103 19:22:29.443402   30211 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0103 19:22:29.443410   30211 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0103 19:22:29.443415   30211 command_runner.go:130] > # default_runtime = "runc"
	I0103 19:22:29.443423   30211 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0103 19:22:29.443433   30211 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0103 19:22:29.443444   30211 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0103 19:22:29.443451   30211 command_runner.go:130] > # creation as a file is not desired either.
	I0103 19:22:29.443458   30211 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0103 19:22:29.443465   30211 command_runner.go:130] > # the hostname is being managed dynamically.
	I0103 19:22:29.443470   30211 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0103 19:22:29.443475   30211 command_runner.go:130] > # ]
	I0103 19:22:29.443481   30211 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0103 19:22:29.443490   30211 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0103 19:22:29.443498   30211 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0103 19:22:29.443506   30211 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0103 19:22:29.443510   30211 command_runner.go:130] > #
	I0103 19:22:29.443515   30211 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0103 19:22:29.443522   30211 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0103 19:22:29.443526   30211 command_runner.go:130] > #  runtime_type = "oci"
	I0103 19:22:29.443533   30211 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0103 19:22:29.443538   30211 command_runner.go:130] > #  privileged_without_host_devices = false
	I0103 19:22:29.443544   30211 command_runner.go:130] > #  allowed_annotations = []
	I0103 19:22:29.443548   30211 command_runner.go:130] > # Where:
	I0103 19:22:29.443553   30211 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0103 19:22:29.443561   30211 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0103 19:22:29.443570   30211 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0103 19:22:29.443578   30211 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0103 19:22:29.443584   30211 command_runner.go:130] > #   in $PATH.
	I0103 19:22:29.443590   30211 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0103 19:22:29.443597   30211 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0103 19:22:29.443603   30211 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0103 19:22:29.443609   30211 command_runner.go:130] > #   state.
	I0103 19:22:29.443615   30211 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0103 19:22:29.443623   30211 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0103 19:22:29.443629   30211 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0103 19:22:29.443635   30211 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0103 19:22:29.443646   30211 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0103 19:22:29.443655   30211 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0103 19:22:29.443662   30211 command_runner.go:130] > #   The currently recognized values are:
	I0103 19:22:29.443669   30211 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0103 19:22:29.443678   30211 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0103 19:22:29.443686   30211 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0103 19:22:29.443694   30211 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0103 19:22:29.443702   30211 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0103 19:22:29.443710   30211 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0103 19:22:29.443718   30211 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0103 19:22:29.443727   30211 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0103 19:22:29.443734   30211 command_runner.go:130] > #   should be moved to the container's cgroup
	I0103 19:22:29.443739   30211 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0103 19:22:29.443745   30211 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0103 19:22:29.443749   30211 command_runner.go:130] > runtime_type = "oci"
	I0103 19:22:29.443754   30211 command_runner.go:130] > runtime_root = "/run/runc"
	I0103 19:22:29.443761   30211 command_runner.go:130] > runtime_config_path = ""
	I0103 19:22:29.443765   30211 command_runner.go:130] > monitor_path = ""
	I0103 19:22:29.443771   30211 command_runner.go:130] > monitor_cgroup = ""
	I0103 19:22:29.443775   30211 command_runner.go:130] > monitor_exec_cgroup = ""
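
The runc entry above is the only active handler in this run; the commented block before it documents the general [crio.runtime.runtimes.<handler>] schema. As a rough cross-check of which handlers the running CRI-O actually exposes, something like the following could be run on the node (assuming the crio and crictl binaries are present, e.g. after `minikube ssh`):

    # Illustrative only: dump the merged runtimes table and the runtime info the CRI reports.
    sudo crio config | grep -A 5 '\[crio.runtime.runtimes'
    sudo crictl info | grep -i runtime
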
	I0103 19:22:29.443783   30211 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0103 19:22:29.443789   30211 command_runner.go:130] > # running containers
	I0103 19:22:29.443794   30211 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0103 19:22:29.443800   30211 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0103 19:22:29.443824   30211 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0103 19:22:29.443831   30211 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0103 19:22:29.443839   30211 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0103 19:22:29.443843   30211 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0103 19:22:29.443850   30211 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0103 19:22:29.443855   30211 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0103 19:22:29.443862   30211 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0103 19:22:29.443866   30211 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0103 19:22:29.443875   30211 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0103 19:22:29.443880   30211 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0103 19:22:29.443888   30211 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0103 19:22:29.443895   30211 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0103 19:22:29.443903   30211 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0103 19:22:29.443910   30211 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0103 19:22:29.443920   30211 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0103 19:22:29.443930   30211 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0103 19:22:29.443938   30211 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0103 19:22:29.443945   30211 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0103 19:22:29.443951   30211 command_runner.go:130] > # Example:
	I0103 19:22:29.443956   30211 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0103 19:22:29.443963   30211 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0103 19:22:29.443968   30211 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0103 19:22:29.443975   30211 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0103 19:22:29.443979   30211 command_runner.go:130] > # cpuset = 0
	I0103 19:22:29.443984   30211 command_runner.go:130] > # cpushares = "0-1"
	I0103 19:22:29.443987   30211 command_runner.go:130] > # Where:
	I0103 19:22:29.443994   30211 command_runner.go:130] > # The workload name is workload-type.
	I0103 19:22:29.444001   30211 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0103 19:22:29.444010   30211 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0103 19:22:29.444017   30211 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0103 19:22:29.444027   30211 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0103 19:22:29.444035   30211 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0103 19:22:29.444041   30211 command_runner.go:130] > # 
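
The workloads mechanism described above is driven purely by pod annotations. A minimal, hypothetical pod that opts into the example "workload-type" workload and overrides cpushares for one container, following the annotation forms shown in the config comments (none of these objects exist in this test run), might look like:

    # Hypothetical opt-in pod mirroring the io.crio/workload example from the config comments.
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: workload-demo
      annotations:
        io.crio/workload: ""                        # activation annotation; value is ignored
        io.crio.workload-type/ctr: '{"cpushares": "512"}'
    spec:
      containers:
      - name: ctr
        image: registry.k8s.io/pause:3.9
    EOF
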
	I0103 19:22:29.444048   30211 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0103 19:22:29.444053   30211 command_runner.go:130] > #
	I0103 19:22:29.444058   30211 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0103 19:22:29.444066   30211 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0103 19:22:29.444075   30211 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0103 19:22:29.444082   30211 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0103 19:22:29.444090   30211 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0103 19:22:29.444096   30211 command_runner.go:130] > [crio.image]
	I0103 19:22:29.444102   30211 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0103 19:22:29.444108   30211 command_runner.go:130] > # default_transport = "docker://"
	I0103 19:22:29.444114   30211 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0103 19:22:29.444123   30211 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0103 19:22:29.444130   30211 command_runner.go:130] > # global_auth_file = ""
	I0103 19:22:29.444135   30211 command_runner.go:130] > # The image used to instantiate infra containers.
	I0103 19:22:29.444142   30211 command_runner.go:130] > # This option supports live configuration reload.
	I0103 19:22:29.444147   30211 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0103 19:22:29.444155   30211 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0103 19:22:29.444163   30211 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0103 19:22:29.444171   30211 command_runner.go:130] > # This option supports live configuration reload.
	I0103 19:22:29.444176   30211 command_runner.go:130] > # pause_image_auth_file = ""
	I0103 19:22:29.444182   30211 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0103 19:22:29.444190   30211 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0103 19:22:29.444197   30211 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0103 19:22:29.444205   30211 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0103 19:22:29.444211   30211 command_runner.go:130] > # pause_command = "/pause"
	I0103 19:22:29.444217   30211 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0103 19:22:29.444226   30211 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0103 19:22:29.444232   30211 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0103 19:22:29.444239   30211 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0103 19:22:29.444247   30211 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0103 19:22:29.444252   30211 command_runner.go:130] > # signature_policy = ""
	I0103 19:22:29.444258   30211 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0103 19:22:29.444266   30211 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0103 19:22:29.444272   30211 command_runner.go:130] > # changing them here.
	I0103 19:22:29.444276   30211 command_runner.go:130] > # insecure_registries = [
	I0103 19:22:29.444282   30211 command_runner.go:130] > # ]
	I0103 19:22:29.444289   30211 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0103 19:22:29.444296   30211 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0103 19:22:29.444300   30211 command_runner.go:130] > # image_volumes = "mkdir"
	I0103 19:22:29.444308   30211 command_runner.go:130] > # Temporary directory to use for storing big files
	I0103 19:22:29.444312   30211 command_runner.go:130] > # big_files_temporary_dir = ""
	I0103 19:22:29.444318   30211 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0103 19:22:29.444325   30211 command_runner.go:130] > # CNI plugins.
	I0103 19:22:29.444329   30211 command_runner.go:130] > [crio.network]
	I0103 19:22:29.444337   30211 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0103 19:22:29.444345   30211 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0103 19:22:29.444349   30211 command_runner.go:130] > # cni_default_network = ""
	I0103 19:22:29.444357   30211 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0103 19:22:29.444364   30211 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0103 19:22:29.444369   30211 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0103 19:22:29.444375   30211 command_runner.go:130] > # plugin_dirs = [
	I0103 19:22:29.444379   30211 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0103 19:22:29.444384   30211 command_runner.go:130] > # ]
	I0103 19:22:29.444391   30211 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0103 19:22:29.444397   30211 command_runner.go:130] > [crio.metrics]
	I0103 19:22:29.444402   30211 command_runner.go:130] > # Globally enable or disable metrics support.
	I0103 19:22:29.444409   30211 command_runner.go:130] > enable_metrics = true
	I0103 19:22:29.444414   30211 command_runner.go:130] > # Specify enabled metrics collectors.
	I0103 19:22:29.444420   30211 command_runner.go:130] > # Per default all metrics are enabled.
	I0103 19:22:29.444426   30211 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0103 19:22:29.444435   30211 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0103 19:22:29.444442   30211 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0103 19:22:29.444447   30211 command_runner.go:130] > # metrics_collectors = [
	I0103 19:22:29.444451   30211 command_runner.go:130] > # 	"operations",
	I0103 19:22:29.444458   30211 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0103 19:22:29.444462   30211 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0103 19:22:29.444468   30211 command_runner.go:130] > # 	"operations_errors",
	I0103 19:22:29.444473   30211 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0103 19:22:29.444479   30211 command_runner.go:130] > # 	"image_pulls_by_name",
	I0103 19:22:29.444483   30211 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0103 19:22:29.444490   30211 command_runner.go:130] > # 	"image_pulls_failures",
	I0103 19:22:29.444494   30211 command_runner.go:130] > # 	"image_pulls_successes",
	I0103 19:22:29.444501   30211 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0103 19:22:29.444505   30211 command_runner.go:130] > # 	"image_layer_reuse",
	I0103 19:22:29.444512   30211 command_runner.go:130] > # 	"containers_oom_total",
	I0103 19:22:29.444516   30211 command_runner.go:130] > # 	"containers_oom",
	I0103 19:22:29.444522   30211 command_runner.go:130] > # 	"processes_defunct",
	I0103 19:22:29.444526   30211 command_runner.go:130] > # 	"operations_total",
	I0103 19:22:29.444532   30211 command_runner.go:130] > # 	"operations_latency_seconds",
	I0103 19:22:29.444537   30211 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0103 19:22:29.444544   30211 command_runner.go:130] > # 	"operations_errors_total",
	I0103 19:22:29.444548   30211 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0103 19:22:29.444554   30211 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0103 19:22:29.444559   30211 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0103 19:22:29.444565   30211 command_runner.go:130] > # 	"image_pulls_success_total",
	I0103 19:22:29.444569   30211 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0103 19:22:29.444576   30211 command_runner.go:130] > # 	"containers_oom_count_total",
	I0103 19:22:29.444579   30211 command_runner.go:130] > # ]
	I0103 19:22:29.444587   30211 command_runner.go:130] > # The port on which the metrics server will listen.
	I0103 19:22:29.444593   30211 command_runner.go:130] > # metrics_port = 9090
	I0103 19:22:29.444599   30211 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0103 19:22:29.444605   30211 command_runner.go:130] > # metrics_socket = ""
	I0103 19:22:29.444610   30211 command_runner.go:130] > # The certificate for the secure metrics server.
	I0103 19:22:29.444619   30211 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0103 19:22:29.444627   30211 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0103 19:22:29.444633   30211 command_runner.go:130] > # certificate on any modification event.
	I0103 19:22:29.444640   30211 command_runner.go:130] > # metrics_cert = ""
	I0103 19:22:29.444648   30211 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0103 19:22:29.444653   30211 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0103 19:22:29.444658   30211 command_runner.go:130] > # metrics_key = ""
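
With enable_metrics = true and the default (commented-out) port of 9090, the exporter can be spot-checked from the node itself; this is only a sketch and assumes the metrics server is listening on localhost:9090, which this run does not verify:

    # Spot-check the CRI-O Prometheus endpoint on its default port.
    curl -s http://127.0.0.1:9090/metrics | grep crio_operations | head
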
	I0103 19:22:29.444664   30211 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0103 19:22:29.444671   30211 command_runner.go:130] > [crio.tracing]
	I0103 19:22:29.444677   30211 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0103 19:22:29.444683   30211 command_runner.go:130] > # enable_tracing = false
	I0103 19:22:29.444689   30211 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0103 19:22:29.444695   30211 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0103 19:22:29.444700   30211 command_runner.go:130] > # Number of samples to collect per million spans.
	I0103 19:22:29.444707   30211 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0103 19:22:29.444713   30211 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0103 19:22:29.444719   30211 command_runner.go:130] > [crio.stats]
	I0103 19:22:29.444725   30211 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0103 19:22:29.444731   30211 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0103 19:22:29.444737   30211 command_runner.go:130] > # stats_collection_period = 0
	I0103 19:22:29.444802   30211 cni.go:84] Creating CNI manager for ""
	I0103 19:22:29.444812   30211 cni.go:136] 2 nodes found, recommending kindnet
	I0103 19:22:29.444820   30211 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0103 19:22:29.444838   30211 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.86 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-484895 NodeName:multinode-484895-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.191"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.86 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0103 19:22:29.444941   30211 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.86
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-484895-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.86
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.191"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
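
The rendered kubeadm configuration above bundles InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration into one file. Outside of minikube, a file like this can be sanity-checked before use; the path below is hypothetical, and `kubeadm config validate` requires a reasonably recent kubeadm release:

    # Hypothetical pre-flight check of a generated kubeadm config file.
    kubeadm config validate --config /tmp/kubeadm.yaml
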
	
	I0103 19:22:29.445006   30211 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-484895-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.86
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-484895 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
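
The unit fragment above overrides the kubelet's ExecStart; minikube writes it as a systemd drop-in (the 10-kubeadm.conf scp a few lines below). On the node, the effective unit and its drop-ins can be inspected with standard systemd tooling:

    # Show the kubelet unit including drop-ins; reload systemd after any manual change.
    systemctl cat kubelet
    sudo systemctl daemon-reload
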
	I0103 19:22:29.445058   30211 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0103 19:22:29.454546   30211 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	I0103 19:22:29.454587   30211 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I0103 19:22:29.454631   30211 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I0103 19:22:29.463852   30211 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/17885-9609/.minikube/cache/linux/amd64/v1.28.4/kubelet
	I0103 19:22:29.463876   30211 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256
	I0103 19:22:29.463912   30211 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/linux/amd64/v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I0103 19:22:29.463931   30211 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/17885-9609/.minikube/cache/linux/amd64/v1.28.4/kubeadm
	I0103 19:22:29.463976   30211 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubectl
	I0103 19:22:29.468220   30211 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0103 19:22:29.468265   30211 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0103 19:22:29.468288   30211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/cache/linux/amd64/v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
	I0103 19:22:30.459120   30211 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/linux/amd64/v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0103 19:22:30.459212   30211 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0103 19:22:30.463862   30211 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0103 19:22:30.464055   30211 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0103 19:22:30.464086   30211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/cache/linux/amd64/v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
	I0103 19:22:31.201575   30211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 19:22:31.215215   30211 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/linux/amd64/v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I0103 19:22:31.215298   30211 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubelet
	I0103 19:22:31.219628   30211 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0103 19:22:31.219729   30211 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0103 19:22:31.219765   30211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/cache/linux/amd64/v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
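
Each binary is pulled from dl.k8s.io together with a published sha256, so the cached files can be re-verified by hand. A rough equivalent of the checksum-verified download, using the same kubelet URLs that appear in the log, is:

    # Re-download and checksum-verify the v1.28.4 kubelet with the URLs shown above.
    curl -LO https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet
    curl -LO https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256
    echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check -
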
	I0103 19:22:31.688587   30211 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0103 19:22:31.696147   30211 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0103 19:22:31.710897   30211 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0103 19:22:31.725386   30211 ssh_runner.go:195] Run: grep 192.168.39.191	control-plane.minikube.internal$ /etc/hosts
	I0103 19:22:31.728848   30211 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.191	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
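
The one-liner above keeps /etc/hosts idempotent: it strips any existing control-plane.minikube.internal entry, appends the current mapping, and copies the result back. Written out step by step, the same pattern is roughly:

    # Same idea as the logged command, unrolled for readability.
    grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts > /tmp/hosts.new
    printf '192.168.39.191\tcontrol-plane.minikube.internal\n' >> /tmp/hosts.new
    sudo cp /tmp/hosts.new /etc/hosts
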
	I0103 19:22:31.739899   30211 host.go:66] Checking if "multinode-484895" exists ...
	I0103 19:22:31.740156   30211 config.go:182] Loaded profile config "multinode-484895": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 19:22:31.740309   30211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 19:22:31.740345   30211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 19:22:31.754245   30211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35489
	I0103 19:22:31.754681   30211 main.go:141] libmachine: () Calling .GetVersion
	I0103 19:22:31.755171   30211 main.go:141] libmachine: Using API Version  1
	I0103 19:22:31.755198   30211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 19:22:31.755491   30211 main.go:141] libmachine: () Calling .GetMachineName
	I0103 19:22:31.755657   30211 main.go:141] libmachine: (multinode-484895) Calling .DriverName
	I0103 19:22:31.755791   30211 start.go:304] JoinCluster: &{Name:multinode-484895 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.4 ClusterName:multinode-484895 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.191 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.86 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraD
isks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 19:22:31.755909   30211 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0103 19:22:31.755929   30211 main.go:141] libmachine: (multinode-484895) Calling .GetSSHHostname
	I0103 19:22:31.758765   30211 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:22:31.759265   30211 main.go:141] libmachine: (multinode-484895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:f0:8c", ip: ""} in network mk-multinode-484895: {Iface:virbr1 ExpiryTime:2024-01-03 20:21:15 +0000 UTC Type:0 Mac:52:54:00:28:f0:8c Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:multinode-484895 Clientid:01:52:54:00:28:f0:8c}
	I0103 19:22:31.759296   30211 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined IP address 192.168.39.191 and MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:22:31.759390   30211 main.go:141] libmachine: (multinode-484895) Calling .GetSSHPort
	I0103 19:22:31.759569   30211 main.go:141] libmachine: (multinode-484895) Calling .GetSSHKeyPath
	I0103 19:22:31.759715   30211 main.go:141] libmachine: (multinode-484895) Calling .GetSSHUsername
	I0103 19:22:31.759874   30211 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/multinode-484895/id_rsa Username:docker}
	I0103 19:22:31.930964   30211 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 4r26ik.vc22w0kmjizza091 --discovery-token-ca-cert-hash sha256:abd7748e33dd825416f0452914584982da7041f4caa98027889459d3fee91b12 
	I0103 19:22:31.931018   30211 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.86 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0103 19:22:31.931042   30211 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 4r26ik.vc22w0kmjizza091 --discovery-token-ca-cert-hash sha256:abd7748e33dd825416f0452914584982da7041f4caa98027889459d3fee91b12 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-484895-m02"
	I0103 19:22:31.974256   30211 command_runner.go:130] ! W0103 19:22:31.950807     822 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0103 19:22:32.091408   30211 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0103 19:22:34.778803   30211 command_runner.go:130] > [preflight] Running pre-flight checks
	I0103 19:22:34.778828   30211 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0103 19:22:34.778843   30211 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0103 19:22:34.778854   30211 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0103 19:22:34.778866   30211 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0103 19:22:34.778874   30211 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0103 19:22:34.778883   30211 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0103 19:22:34.778896   30211 command_runner.go:130] > This node has joined the cluster:
	I0103 19:22:34.778907   30211 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0103 19:22:34.778915   30211 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0103 19:22:34.778937   30211 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0103 19:22:34.778962   30211 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 4r26ik.vc22w0kmjizza091 --discovery-token-ca-cert-hash sha256:abd7748e33dd825416f0452914584982da7041f4caa98027889459d3fee91b12 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-484895-m02": (2.84790209s)
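
Stripped of the minikube wrapping, the join is the standard two-step kubeadm flow: mint a join command on the control plane, then run it on the new node against the CRI-O socket. Using the token and hash already printed above (in practice a fresh token would be generated):

    # On the control plane: print a join command with a non-expiring token.
    kubeadm token create --print-join-command --ttl=0
    # On the worker: run the printed command, pointing kubeadm at CRI-O.
    sudo kubeadm join control-plane.minikube.internal:8443 \
      --token 4r26ik.vc22w0kmjizza091 \
      --discovery-token-ca-cert-hash sha256:abd7748e33dd825416f0452914584982da7041f4caa98027889459d3fee91b12 \
      --cri-socket unix:///var/run/crio/crio.sock
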
	I0103 19:22:34.778985   30211 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0103 19:22:34.938143   30211 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0103 19:22:35.049618   30211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=1b6a81cbc05f28310ff11df4170e79e2b8bf477a minikube.k8s.io/name=multinode-484895 minikube.k8s.io/updated_at=2024_01_03T19_22_35_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:22:35.151693   30211 command_runner.go:130] > node/multinode-484895-m02 labeled
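
The label step marks the joined machine as a non-primary minikube node. A simplified equivalent with plain kubectl, and a way to inspect the result:

    # Apply and inspect the minikube "primary" label on the new node.
    kubectl label node multinode-484895-m02 minikube.k8s.io/primary=false --overwrite
    kubectl get nodes -L minikube.k8s.io/primary,minikube.k8s.io/name
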
	I0103 19:22:35.153322   30211 start.go:306] JoinCluster complete in 3.397523843s
	I0103 19:22:35.153346   30211 cni.go:84] Creating CNI manager for ""
	I0103 19:22:35.153352   30211 cni.go:136] 2 nodes found, recommending kindnet
	I0103 19:22:35.153402   30211 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0103 19:22:35.158612   30211 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0103 19:22:35.158645   30211 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0103 19:22:35.158655   30211 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0103 19:22:35.158664   30211 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0103 19:22:35.158673   30211 command_runner.go:130] > Access: 2024-01-03 19:21:12.720055225 +0000
	I0103 19:22:35.158681   30211 command_runner.go:130] > Modify: 2023-12-16 11:53:47.000000000 +0000
	I0103 19:22:35.158694   30211 command_runner.go:130] > Change: 2024-01-03 19:21:11.081055225 +0000
	I0103 19:22:35.158703   30211 command_runner.go:130] >  Birth: -
	I0103 19:22:35.158808   30211 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0103 19:22:35.158827   30211 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0103 19:22:35.180170   30211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0103 19:22:35.484721   30211 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0103 19:22:35.484757   30211 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0103 19:22:35.484766   30211 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0103 19:22:35.484773   30211 command_runner.go:130] > daemonset.apps/kindnet configured
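
Once the CNI manifest is applied, the kindnet DaemonSet should schedule a pod onto the new node. A quick check, assuming kindnet runs in kube-system and its pods carry the app=kindnet label (as in minikube's bundled manifest):

    # Confirm the kindnet DaemonSet has rolled out to every node.
    kubectl -n kube-system rollout status daemonset/kindnet
    kubectl -n kube-system get pods -l app=kindnet -o wide
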
	I0103 19:22:35.485208   30211 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17885-9609/kubeconfig
	I0103 19:22:35.485478   30211 kapi.go:59] client config for multinode-484895: &rest.Config{Host:"https://192.168.39.191:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17885-9609/.minikube/profiles/multinode-484895/client.crt", KeyFile:"/home/jenkins/minikube-integration/17885-9609/.minikube/profiles/multinode-484895/client.key", CAFile:"/home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c20060), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0103 19:22:35.485853   30211 round_trippers.go:463] GET https://192.168.39.191:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0103 19:22:35.485867   30211 round_trippers.go:469] Request Headers:
	I0103 19:22:35.485875   30211 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:22:35.485880   30211 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:22:35.488156   30211 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:22:35.488178   30211 round_trippers.go:577] Response Headers:
	I0103 19:22:35.488188   30211 round_trippers.go:580]     Content-Length: 291
	I0103 19:22:35.488196   30211 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:22:35 GMT
	I0103 19:22:35.488205   30211 round_trippers.go:580]     Audit-Id: b33e56b6-faac-4d0e-b9c7-108e559052ba
	I0103 19:22:35.488216   30211 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:22:35.488227   30211 round_trippers.go:580]     Content-Type: application/json
	I0103 19:22:35.488235   30211 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:22:35.488246   30211 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:22:35.488285   30211 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"e2317390-8a66-46be-8656-5adca86177ea","resourceVersion":"404","creationTimestamp":"2024-01-03T19:21:43Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0103 19:22:35.488385   30211 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-484895" context rescaled to 1 replicas
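
The rescale above goes through the Deployment's scale subresource; the equivalent imperative command against the same object is:

    # Equivalent of the scale-subresource call made by minikube.
    kubectl -n kube-system scale deployment coredns --replicas=1
    kubectl -n kube-system get deployment coredns
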
	I0103 19:22:35.488420   30211 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.39.86 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0103 19:22:35.490674   30211 out.go:177] * Verifying Kubernetes components...
	I0103 19:22:35.492338   30211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 19:22:35.507747   30211 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17885-9609/kubeconfig
	I0103 19:22:35.508009   30211 kapi.go:59] client config for multinode-484895: &rest.Config{Host:"https://192.168.39.191:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17885-9609/.minikube/profiles/multinode-484895/client.crt", KeyFile:"/home/jenkins/minikube-integration/17885-9609/.minikube/profiles/multinode-484895/client.key", CAFile:"/home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c20060), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0103 19:22:35.508373   30211 node_ready.go:35] waiting up to 6m0s for node "multinode-484895-m02" to be "Ready" ...
	I0103 19:22:35.508473   30211 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895-m02
	I0103 19:22:35.508484   30211 round_trippers.go:469] Request Headers:
	I0103 19:22:35.508501   30211 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:22:35.508510   30211 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:22:35.511653   30211 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0103 19:22:35.511675   30211 round_trippers.go:577] Response Headers:
	I0103 19:22:35.511686   30211 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:22:35.511694   30211 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:22:35.511706   30211 round_trippers.go:580]     Content-Length: 4082
	I0103 19:22:35.511713   30211 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:22:35 GMT
	I0103 19:22:35.511726   30211 round_trippers.go:580]     Audit-Id: 33c0861e-2184-41f0-9b27-26655c90f1f6
	I0103 19:22:35.511733   30211 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:22:35.511748   30211 round_trippers.go:580]     Content-Type: application/json
	I0103 19:22:35.511839   30211 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895-m02","uid":"7da57402-60a6-432d-91c4-768d87ae2e5f","resourceVersion":"458","creationTimestamp":"2024-01-03T19:22:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_22_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:22:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3058 chars]
	I0103 19:22:36.009389   30211 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895-m02
	I0103 19:22:36.009412   30211 round_trippers.go:469] Request Headers:
	I0103 19:22:36.009421   30211 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:22:36.009427   30211 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:22:36.012752   30211 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0103 19:22:36.012775   30211 round_trippers.go:577] Response Headers:
	I0103 19:22:36.012783   30211 round_trippers.go:580]     Audit-Id: da0f1338-c119-4d02-adf1-ec5d439d6e8a
	I0103 19:22:36.012788   30211 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:22:36.012794   30211 round_trippers.go:580]     Content-Type: application/json
	I0103 19:22:36.012799   30211 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:22:36.012805   30211 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:22:36.012810   30211 round_trippers.go:580]     Content-Length: 4082
	I0103 19:22:36.012815   30211 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:22:36 GMT
	I0103 19:22:36.012894   30211 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895-m02","uid":"7da57402-60a6-432d-91c4-768d87ae2e5f","resourceVersion":"458","creationTimestamp":"2024-01-03T19:22:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_22_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:22:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3058 chars]
	I0103 19:22:36.508605   30211 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895-m02
	I0103 19:22:36.508634   30211 round_trippers.go:469] Request Headers:
	I0103 19:22:36.508642   30211 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:22:36.508649   30211 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:22:36.536285   30211 round_trippers.go:574] Response Status: 200 OK in 27 milliseconds
	I0103 19:22:36.536315   30211 round_trippers.go:577] Response Headers:
	I0103 19:22:36.536326   30211 round_trippers.go:580]     Audit-Id: 44b049bf-0adf-4f95-b450-ff943a151236
	I0103 19:22:36.536335   30211 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:22:36.536343   30211 round_trippers.go:580]     Content-Type: application/json
	I0103 19:22:36.536351   30211 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:22:36.536358   30211 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:22:36.536365   30211 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:22:36 GMT
	I0103 19:22:36.536549   30211 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895-m02","uid":"7da57402-60a6-432d-91c4-768d87ae2e5f","resourceVersion":"461","creationTimestamp":"2024-01-03T19:22:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_22_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:22:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3167 chars]
	I0103 19:22:37.009480   30211 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895-m02
	I0103 19:22:37.009501   30211 round_trippers.go:469] Request Headers:
	I0103 19:22:37.009509   30211 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:22:37.009515   30211 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:22:37.012072   30211 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:22:37.012098   30211 round_trippers.go:577] Response Headers:
	I0103 19:22:37.012108   30211 round_trippers.go:580]     Audit-Id: 52d1ca1e-305c-47a0-a4cb-2ee26f0cbf00
	I0103 19:22:37.012116   30211 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:22:37.012124   30211 round_trippers.go:580]     Content-Type: application/json
	I0103 19:22:37.012132   30211 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:22:37.012140   30211 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:22:37.012148   30211 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:22:37 GMT
	I0103 19:22:37.012258   30211 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895-m02","uid":"7da57402-60a6-432d-91c4-768d87ae2e5f","resourceVersion":"461","creationTimestamp":"2024-01-03T19:22:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_22_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:22:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3167 chars]
	I0103 19:22:37.508829   30211 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895-m02
	I0103 19:22:37.508853   30211 round_trippers.go:469] Request Headers:
	I0103 19:22:37.508861   30211 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:22:37.508867   30211 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:22:37.513066   30211 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0103 19:22:37.513091   30211 round_trippers.go:577] Response Headers:
	I0103 19:22:37.513113   30211 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:22:37.513120   30211 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:22:37 GMT
	I0103 19:22:37.513128   30211 round_trippers.go:580]     Audit-Id: e9305031-680e-4dea-84b1-ee2921a2bf81
	I0103 19:22:37.513138   30211 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:22:37.513146   30211 round_trippers.go:580]     Content-Type: application/json
	I0103 19:22:37.513153   30211 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:22:37.513328   30211 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895-m02","uid":"7da57402-60a6-432d-91c4-768d87ae2e5f","resourceVersion":"461","creationTimestamp":"2024-01-03T19:22:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_22_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:22:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3167 chars]
	I0103 19:22:37.513693   30211 node_ready.go:58] node "multinode-484895-m02" has status "Ready":"False"
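
The node_ready entries above and below are minikube's node-readiness poll: roughly every 500ms it re-fetches the Node object for multinode-484895-m02 and inspects its Ready condition until the kubelet reports Ready. A minimal client-go sketch of that check follows; it is an illustration of the pattern visible in the log, not minikube's actual node_ready.go, and the kubeconfig source, poll interval, and 2-minute timeout are assumptions.

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady re-fetches the Node until its Ready condition is True,
    // mirroring the GET /api/v1/nodes/<name> loop recorded in the log.
    func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
    	for {
    		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    		if err != nil {
    			return err
    		}
    		for _, c := range node.Status.Conditions {
    			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    				return nil // node reports Ready
    			}
    		}
    		select {
    		case <-ctx.Done():
    			return ctx.Err()
    		case <-time.After(500 * time.Millisecond): // interval seen between polls in the log
    		}
    	}
    }

    func main() {
    	// Assumed kubeconfig location; minikube writes one entry per profile.
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
    	defer cancel()
    	if err := waitNodeReady(ctx, cs, "multinode-484895-m02"); err != nil {
    		panic(err)
    	}
    	fmt.Println("node Ready")
    }
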
	I0103 19:22:38.009611   30211 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895-m02
	I0103 19:22:38.009632   30211 round_trippers.go:469] Request Headers:
	I0103 19:22:38.009640   30211 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:22:38.009646   30211 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:22:38.012529   30211 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:22:38.012548   30211 round_trippers.go:577] Response Headers:
	I0103 19:22:38.012555   30211 round_trippers.go:580]     Content-Type: application/json
	I0103 19:22:38.012560   30211 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:22:38.012565   30211 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:22:38.012571   30211 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:22:38 GMT
	I0103 19:22:38.012576   30211 round_trippers.go:580]     Audit-Id: 5c54dee5-c207-40c9-8703-6002afff474d
	I0103 19:22:38.012581   30211 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:22:38.012735   30211 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895-m02","uid":"7da57402-60a6-432d-91c4-768d87ae2e5f","resourceVersion":"461","creationTimestamp":"2024-01-03T19:22:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_22_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:22:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3167 chars]
	I0103 19:22:38.509440   30211 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895-m02
	I0103 19:22:38.509479   30211 round_trippers.go:469] Request Headers:
	I0103 19:22:38.509491   30211 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:22:38.509501   30211 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:22:38.512976   30211 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0103 19:22:38.513005   30211 round_trippers.go:577] Response Headers:
	I0103 19:22:38.513015   30211 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:22:38 GMT
	I0103 19:22:38.513034   30211 round_trippers.go:580]     Audit-Id: 3845e2ae-a3b5-441e-b66c-443fc85956a9
	I0103 19:22:38.513045   30211 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:22:38.513054   30211 round_trippers.go:580]     Content-Type: application/json
	I0103 19:22:38.513067   30211 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:22:38.513075   30211 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:22:38.513401   30211 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895-m02","uid":"7da57402-60a6-432d-91c4-768d87ae2e5f","resourceVersion":"461","creationTimestamp":"2024-01-03T19:22:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_22_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:22:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3167 chars]
	I0103 19:22:39.009026   30211 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895-m02
	I0103 19:22:39.009065   30211 round_trippers.go:469] Request Headers:
	I0103 19:22:39.009074   30211 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:22:39.009079   30211 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:22:39.011873   30211 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:22:39.011899   30211 round_trippers.go:577] Response Headers:
	I0103 19:22:39.011908   30211 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:22:39.011916   30211 round_trippers.go:580]     Content-Type: application/json
	I0103 19:22:39.011923   30211 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:22:39.011931   30211 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:22:39.011942   30211 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:22:39 GMT
	I0103 19:22:39.011951   30211 round_trippers.go:580]     Audit-Id: 3b65dfb1-61c3-45bc-a84d-42c0fa673365
	I0103 19:22:39.012413   30211 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895-m02","uid":"7da57402-60a6-432d-91c4-768d87ae2e5f","resourceVersion":"461","creationTimestamp":"2024-01-03T19:22:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_22_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:22:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3167 chars]
	I0103 19:22:39.509091   30211 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895-m02
	I0103 19:22:39.509118   30211 round_trippers.go:469] Request Headers:
	I0103 19:22:39.509126   30211 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:22:39.509132   30211 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:22:39.511825   30211 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:22:39.511848   30211 round_trippers.go:577] Response Headers:
	I0103 19:22:39.511857   30211 round_trippers.go:580]     Audit-Id: d80d5203-48bf-469f-9128-349b29d58aae
	I0103 19:22:39.511865   30211 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:22:39.511872   30211 round_trippers.go:580]     Content-Type: application/json
	I0103 19:22:39.511881   30211 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:22:39.511888   30211 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:22:39.511895   30211 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:22:39 GMT
	I0103 19:22:39.512088   30211 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895-m02","uid":"7da57402-60a6-432d-91c4-768d87ae2e5f","resourceVersion":"461","creationTimestamp":"2024-01-03T19:22:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_22_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:22:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3167 chars]
	I0103 19:22:40.008720   30211 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895-m02
	I0103 19:22:40.008746   30211 round_trippers.go:469] Request Headers:
	I0103 19:22:40.008768   30211 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:22:40.008775   30211 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:22:40.011478   30211 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:22:40.011501   30211 round_trippers.go:577] Response Headers:
	I0103 19:22:40.011511   30211 round_trippers.go:580]     Audit-Id: 31375399-4c18-46e4-a556-5225e1be57a1
	I0103 19:22:40.011520   30211 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:22:40.011528   30211 round_trippers.go:580]     Content-Type: application/json
	I0103 19:22:40.011535   30211 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:22:40.011542   30211 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:22:40.011549   30211 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:22:40 GMT
	I0103 19:22:40.011718   30211 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895-m02","uid":"7da57402-60a6-432d-91c4-768d87ae2e5f","resourceVersion":"461","creationTimestamp":"2024-01-03T19:22:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_22_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:22:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3167 chars]
	I0103 19:22:40.012000   30211 node_ready.go:58] node "multinode-484895-m02" has status "Ready":"False"
	I0103 19:22:40.508889   30211 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895-m02
	I0103 19:22:40.508911   30211 round_trippers.go:469] Request Headers:
	I0103 19:22:40.508919   30211 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:22:40.508925   30211 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:22:40.511442   30211 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:22:40.511465   30211 round_trippers.go:577] Response Headers:
	I0103 19:22:40.511474   30211 round_trippers.go:580]     Audit-Id: 8cb71a7f-4c0a-40e0-9f04-8a12bf5ded7d
	I0103 19:22:40.511482   30211 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:22:40.511488   30211 round_trippers.go:580]     Content-Type: application/json
	I0103 19:22:40.511497   30211 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:22:40.511504   30211 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:22:40.511515   30211 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:22:40 GMT
	I0103 19:22:40.511702   30211 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895-m02","uid":"7da57402-60a6-432d-91c4-768d87ae2e5f","resourceVersion":"461","creationTimestamp":"2024-01-03T19:22:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_22_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:22:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3167 chars]
	I0103 19:22:41.009384   30211 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895-m02
	I0103 19:22:41.009410   30211 round_trippers.go:469] Request Headers:
	I0103 19:22:41.009422   30211 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:22:41.009428   30211 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:22:41.012421   30211 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:22:41.012445   30211 round_trippers.go:577] Response Headers:
	I0103 19:22:41.012455   30211 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:22:41.012466   30211 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:22:41 GMT
	I0103 19:22:41.012474   30211 round_trippers.go:580]     Audit-Id: 00698eaa-7b35-4b2c-b327-15f9256f71bd
	I0103 19:22:41.012481   30211 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:22:41.012489   30211 round_trippers.go:580]     Content-Type: application/json
	I0103 19:22:41.012497   30211 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:22:41.012650   30211 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895-m02","uid":"7da57402-60a6-432d-91c4-768d87ae2e5f","resourceVersion":"461","creationTimestamp":"2024-01-03T19:22:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_22_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:22:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3167 chars]
	I0103 19:22:41.509373   30211 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895-m02
	I0103 19:22:41.509400   30211 round_trippers.go:469] Request Headers:
	I0103 19:22:41.509408   30211 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:22:41.509414   30211 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:22:41.512525   30211 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0103 19:22:41.512576   30211 round_trippers.go:577] Response Headers:
	I0103 19:22:41.512584   30211 round_trippers.go:580]     Audit-Id: 7e9f9cab-2b2f-430b-bf41-4a792e24f990
	I0103 19:22:41.512590   30211 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:22:41.512597   30211 round_trippers.go:580]     Content-Type: application/json
	I0103 19:22:41.512606   30211 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:22:41.512614   30211 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:22:41.512623   30211 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:22:41 GMT
	I0103 19:22:41.512775   30211 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895-m02","uid":"7da57402-60a6-432d-91c4-768d87ae2e5f","resourceVersion":"461","creationTimestamp":"2024-01-03T19:22:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_22_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:22:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3167 chars]
	I0103 19:22:42.009180   30211 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895-m02
	I0103 19:22:42.009205   30211 round_trippers.go:469] Request Headers:
	I0103 19:22:42.009214   30211 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:22:42.009220   30211 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:22:42.012033   30211 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:22:42.012059   30211 round_trippers.go:577] Response Headers:
	I0103 19:22:42.012071   30211 round_trippers.go:580]     Audit-Id: 4681016f-92aa-4d38-b383-90d305cf5190
	I0103 19:22:42.012078   30211 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:22:42.012085   30211 round_trippers.go:580]     Content-Type: application/json
	I0103 19:22:42.012094   30211 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:22:42.012101   30211 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:22:42.012113   30211 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:22:42 GMT
	I0103 19:22:42.012339   30211 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895-m02","uid":"7da57402-60a6-432d-91c4-768d87ae2e5f","resourceVersion":"461","creationTimestamp":"2024-01-03T19:22:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_22_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:22:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3167 chars]
	I0103 19:22:42.012675   30211 node_ready.go:58] node "multinode-484895-m02" has status "Ready":"False"
	I0103 19:22:42.508672   30211 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895-m02
	I0103 19:22:42.508697   30211 round_trippers.go:469] Request Headers:
	I0103 19:22:42.508707   30211 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:22:42.508715   30211 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:22:42.512344   30211 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0103 19:22:42.512371   30211 round_trippers.go:577] Response Headers:
	I0103 19:22:42.512381   30211 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:22:42.512390   30211 round_trippers.go:580]     Content-Type: application/json
	I0103 19:22:42.512399   30211 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:22:42.512413   30211 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:22:42.512426   30211 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:22:42 GMT
	I0103 19:22:42.512434   30211 round_trippers.go:580]     Audit-Id: b5ac3c27-d12d-420b-a989-dd9b020df174
	I0103 19:22:42.512636   30211 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895-m02","uid":"7da57402-60a6-432d-91c4-768d87ae2e5f","resourceVersion":"461","creationTimestamp":"2024-01-03T19:22:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_22_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:22:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3167 chars]
	I0103 19:22:43.008922   30211 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895-m02
	I0103 19:22:43.008947   30211 round_trippers.go:469] Request Headers:
	I0103 19:22:43.008958   30211 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:22:43.008966   30211 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:22:43.012581   30211 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0103 19:22:43.012600   30211 round_trippers.go:577] Response Headers:
	I0103 19:22:43.012607   30211 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:22:43.012613   30211 round_trippers.go:580]     Content-Type: application/json
	I0103 19:22:43.012618   30211 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:22:43.012623   30211 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:22:43.012634   30211 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:22:43 GMT
	I0103 19:22:43.012646   30211 round_trippers.go:580]     Audit-Id: 5bca8d3b-e575-468d-8546-4eaf4ff19ca9
	I0103 19:22:43.012782   30211 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895-m02","uid":"7da57402-60a6-432d-91c4-768d87ae2e5f","resourceVersion":"461","creationTimestamp":"2024-01-03T19:22:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_22_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:22:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3167 chars]
	I0103 19:22:43.509505   30211 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895-m02
	I0103 19:22:43.509530   30211 round_trippers.go:469] Request Headers:
	I0103 19:22:43.509538   30211 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:22:43.509544   30211 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:22:43.512172   30211 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:22:43.512195   30211 round_trippers.go:577] Response Headers:
	I0103 19:22:43.512202   30211 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:22:43.512208   30211 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:22:43 GMT
	I0103 19:22:43.512213   30211 round_trippers.go:580]     Audit-Id: dafc5a35-1644-46ca-af68-feec338be33a
	I0103 19:22:43.512218   30211 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:22:43.512223   30211 round_trippers.go:580]     Content-Type: application/json
	I0103 19:22:43.512230   30211 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:22:43.512350   30211 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895-m02","uid":"7da57402-60a6-432d-91c4-768d87ae2e5f","resourceVersion":"461","creationTimestamp":"2024-01-03T19:22:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_22_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:22:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3167 chars]
	I0103 19:22:44.009009   30211 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895-m02
	I0103 19:22:44.009035   30211 round_trippers.go:469] Request Headers:
	I0103 19:22:44.009044   30211 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:22:44.009054   30211 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:22:44.012744   30211 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0103 19:22:44.012767   30211 round_trippers.go:577] Response Headers:
	I0103 19:22:44.012774   30211 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:22:44 GMT
	I0103 19:22:44.012780   30211 round_trippers.go:580]     Audit-Id: 2f2286ee-6a06-41b0-8b22-18f1fc3bbd66
	I0103 19:22:44.012785   30211 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:22:44.012790   30211 round_trippers.go:580]     Content-Type: application/json
	I0103 19:22:44.012796   30211 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:22:44.012801   30211 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:22:44.012968   30211 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895-m02","uid":"7da57402-60a6-432d-91c4-768d87ae2e5f","resourceVersion":"461","creationTimestamp":"2024-01-03T19:22:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_22_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:22:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3167 chars]
	I0103 19:22:44.013355   30211 node_ready.go:58] node "multinode-484895-m02" has status "Ready":"False"
	I0103 19:22:44.509379   30211 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895-m02
	I0103 19:22:44.509405   30211 round_trippers.go:469] Request Headers:
	I0103 19:22:44.509415   30211 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:22:44.509423   30211 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:22:44.511637   30211 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:22:44.511660   30211 round_trippers.go:577] Response Headers:
	I0103 19:22:44.511669   30211 round_trippers.go:580]     Audit-Id: d01b0d53-1cd0-4b2a-9ee6-f76af23658a4
	I0103 19:22:44.511677   30211 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:22:44.511685   30211 round_trippers.go:580]     Content-Type: application/json
	I0103 19:22:44.511693   30211 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:22:44.511702   30211 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:22:44.511711   30211 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:22:44 GMT
	I0103 19:22:44.511878   30211 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895-m02","uid":"7da57402-60a6-432d-91c4-768d87ae2e5f","resourceVersion":"483","creationTimestamp":"2024-01-03T19:22:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_22_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:22:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3253 chars]
	I0103 19:22:44.512207   30211 node_ready.go:49] node "multinode-484895-m02" has status "Ready":"True"
	I0103 19:22:44.512226   30211 node_ready.go:38] duration metric: took 9.003829083s waiting for node "multinode-484895-m02" to be "Ready" ...
	I0103 19:22:44.512242   30211 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
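
At this point the log switches from node readiness to pod readiness: it lists kube-system once, then re-fetches each system-critical pod (coredns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler) until its Ready condition is True. A hedged sketch of that second phase is below; podReady and waitSystemPodsReady are illustrative names, the poll interval is assumed, and for brevity the sketch checks every kube-system pod rather than filtering by the label/component selectors listed in the log line above.

    import (
    	"context"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // podReady reports whether a Pod's Ready condition is True.
    func podReady(p *corev1.Pod) bool {
    	for _, c := range p.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    // waitSystemPodsReady lists kube-system once, then polls each pod until it
    // is Ready, mirroring the pod_ready loop recorded in the log below.
    func waitSystemPodsReady(ctx context.Context, cs *kubernetes.Clientset) error {
    	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
    	if err != nil {
    		return err
    	}
    	for _, p := range pods.Items {
    		for {
    			cur, err := cs.CoreV1().Pods("kube-system").Get(ctx, p.Name, metav1.GetOptions{})
    			if err != nil {
    				return err
    			}
    			if podReady(cur) {
    				break
    			}
    			select {
    			case <-ctx.Done():
    				return ctx.Err()
    			case <-time.After(500 * time.Millisecond):
    			}
    		}
    	}
    	return nil
    }
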
	I0103 19:22:44.512317   30211 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods
	I0103 19:22:44.512329   30211 round_trippers.go:469] Request Headers:
	I0103 19:22:44.512339   30211 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:22:44.512349   30211 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:22:44.516149   30211 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0103 19:22:44.516170   30211 round_trippers.go:577] Response Headers:
	I0103 19:22:44.516178   30211 round_trippers.go:580]     Audit-Id: 0da4127f-adc5-4cfe-8713-8f36750cbb64
	I0103 19:22:44.516183   30211 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:22:44.516189   30211 round_trippers.go:580]     Content-Type: application/json
	I0103 19:22:44.516194   30211 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:22:44.516198   30211 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:22:44.516204   30211 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:22:44 GMT
	I0103 19:22:44.517432   30211 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"484"},"items":[{"metadata":{"name":"coredns-5dd5756b68-wzsqb","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9e8dd1fe-7476-40d5-8cb7-562fcdc5deaa","resourceVersion":"400","creationTimestamp":"2024-01-03T19:21:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e9219a81-ca58-4a90-b963-60ed0c2d0b1c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:21:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e9219a81-ca58-4a90-b963-60ed0c2d0b1c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 67364 chars]
	I0103 19:22:44.519510   30211 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-wzsqb" in "kube-system" namespace to be "Ready" ...
	I0103 19:22:44.519576   30211 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-wzsqb
	I0103 19:22:44.519584   30211 round_trippers.go:469] Request Headers:
	I0103 19:22:44.519591   30211 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:22:44.519597   30211 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:22:44.521674   30211 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:22:44.521688   30211 round_trippers.go:577] Response Headers:
	I0103 19:22:44.521694   30211 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:22:44.521699   30211 round_trippers.go:580]     Content-Type: application/json
	I0103 19:22:44.521704   30211 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:22:44.521709   30211 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:22:44.521716   30211 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:22:44 GMT
	I0103 19:22:44.521724   30211 round_trippers.go:580]     Audit-Id: 8503a623-368a-43de-be5f-52115e6949d8
	I0103 19:22:44.521870   30211 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-wzsqb","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9e8dd1fe-7476-40d5-8cb7-562fcdc5deaa","resourceVersion":"400","creationTimestamp":"2024-01-03T19:21:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e9219a81-ca58-4a90-b963-60ed0c2d0b1c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:21:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e9219a81-ca58-4a90-b963-60ed0c2d0b1c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I0103 19:22:44.522251   30211 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895
	I0103 19:22:44.522266   30211 round_trippers.go:469] Request Headers:
	I0103 19:22:44.522275   30211 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:22:44.522281   30211 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:22:44.524385   30211 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:22:44.524404   30211 round_trippers.go:577] Response Headers:
	I0103 19:22:44.524411   30211 round_trippers.go:580]     Audit-Id: d4b552f1-900c-4778-8d38-85c16505e8ae
	I0103 19:22:44.524416   30211 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:22:44.524421   30211 round_trippers.go:580]     Content-Type: application/json
	I0103 19:22:44.524428   30211 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:22:44.524437   30211 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:22:44.524444   30211 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:22:44 GMT
	I0103 19:22:44.524551   30211 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","resourceVersion":"382","creationTimestamp":"2024-01-03T19:21:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_21_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:21:40Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0103 19:22:44.524851   30211 pod_ready.go:92] pod "coredns-5dd5756b68-wzsqb" in "kube-system" namespace has status "Ready":"True"
	I0103 19:22:44.524861   30211 pod_ready.go:81] duration metric: took 5.330508ms waiting for pod "coredns-5dd5756b68-wzsqb" in "kube-system" namespace to be "Ready" ...
	I0103 19:22:44.524869   30211 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-484895" in "kube-system" namespace to be "Ready" ...
	I0103 19:22:44.524914   30211 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-484895
	I0103 19:22:44.524917   30211 round_trippers.go:469] Request Headers:
	I0103 19:22:44.524924   30211 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:22:44.524932   30211 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:22:44.527277   30211 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:22:44.527295   30211 round_trippers.go:577] Response Headers:
	I0103 19:22:44.527302   30211 round_trippers.go:580]     Audit-Id: 24f8750c-ad6b-48b2-9f7f-4d7d3505815e
	I0103 19:22:44.527310   30211 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:22:44.527318   30211 round_trippers.go:580]     Content-Type: application/json
	I0103 19:22:44.527325   30211 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:22:44.527334   30211 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:22:44.527342   30211 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:22:44 GMT
	I0103 19:22:44.527443   30211 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-484895","namespace":"kube-system","uid":"2b5f9dc7-2d61-4968-9b9a-cfc029c9522b","resourceVersion":"358","creationTimestamp":"2024-01-03T19:21:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.191:2379","kubernetes.io/config.hash":"9bc39430cce393fdab624e5093adf15c","kubernetes.io/config.mirror":"9bc39430cce393fdab624e5093adf15c","kubernetes.io/config.seen":"2024-01-03T19:21:43.948366778Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:21:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I0103 19:22:44.527801   30211 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895
	I0103 19:22:44.527815   30211 round_trippers.go:469] Request Headers:
	I0103 19:22:44.527822   30211 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:22:44.527827   30211 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:22:44.530111   30211 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:22:44.530130   30211 round_trippers.go:577] Response Headers:
	I0103 19:22:44.530138   30211 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:22:44 GMT
	I0103 19:22:44.530144   30211 round_trippers.go:580]     Audit-Id: c5fa3cba-5205-42a0-8846-a6735d785448
	I0103 19:22:44.530149   30211 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:22:44.530161   30211 round_trippers.go:580]     Content-Type: application/json
	I0103 19:22:44.530169   30211 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:22:44.530177   30211 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:22:44.530310   30211 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","resourceVersion":"382","creationTimestamp":"2024-01-03T19:21:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_21_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:21:40Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0103 19:22:44.530587   30211 pod_ready.go:92] pod "etcd-multinode-484895" in "kube-system" namespace has status "Ready":"True"
	I0103 19:22:44.530602   30211 pod_ready.go:81] duration metric: took 5.728805ms waiting for pod "etcd-multinode-484895" in "kube-system" namespace to be "Ready" ...
	I0103 19:22:44.530618   30211 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-484895" in "kube-system" namespace to be "Ready" ...
	I0103 19:22:44.530673   30211 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-484895
	I0103 19:22:44.530684   30211 round_trippers.go:469] Request Headers:
	I0103 19:22:44.530691   30211 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:22:44.530696   30211 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:22:44.533517   30211 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:22:44.533532   30211 round_trippers.go:577] Response Headers:
	I0103 19:22:44.533538   30211 round_trippers.go:580]     Audit-Id: ec3c802b-9263-479a-bd84-eee57eec0f0a
	I0103 19:22:44.533544   30211 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:22:44.533551   30211 round_trippers.go:580]     Content-Type: application/json
	I0103 19:22:44.533559   30211 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:22:44.533567   30211 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:22:44.533576   30211 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:22:44 GMT
	I0103 19:22:44.534160   30211 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-484895","namespace":"kube-system","uid":"f9f36416-b761-4534-8e09-bc3c94813149","resourceVersion":"406","creationTimestamp":"2024-01-03T19:21:44Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.191:8443","kubernetes.io/config.hash":"2adb5a2561f637a585e38e2b73f2b809","kubernetes.io/config.mirror":"2adb5a2561f637a585e38e2b73f2b809","kubernetes.io/config.seen":"2024-01-03T19:21:43.948370781Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:21:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I0103 19:22:44.534686   30211 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895
	I0103 19:22:44.534702   30211 round_trippers.go:469] Request Headers:
	I0103 19:22:44.534713   30211 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:22:44.534723   30211 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:22:44.537448   30211 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:22:44.537465   30211 round_trippers.go:577] Response Headers:
	I0103 19:22:44.537473   30211 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:22:44 GMT
	I0103 19:22:44.537481   30211 round_trippers.go:580]     Audit-Id: cfe1c3bc-e40c-48a1-9849-b804c26505ac
	I0103 19:22:44.537488   30211 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:22:44.537495   30211 round_trippers.go:580]     Content-Type: application/json
	I0103 19:22:44.537505   30211 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:22:44.537514   30211 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:22:44.537633   30211 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","resourceVersion":"382","creationTimestamp":"2024-01-03T19:21:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_21_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:21:40Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0103 19:22:44.537927   30211 pod_ready.go:92] pod "kube-apiserver-multinode-484895" in "kube-system" namespace has status "Ready":"True"
	I0103 19:22:44.537944   30211 pod_ready.go:81] duration metric: took 7.316987ms waiting for pod "kube-apiserver-multinode-484895" in "kube-system" namespace to be "Ready" ...
	I0103 19:22:44.537955   30211 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-484895" in "kube-system" namespace to be "Ready" ...
	I0103 19:22:44.538001   30211 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-484895
	I0103 19:22:44.538010   30211 round_trippers.go:469] Request Headers:
	I0103 19:22:44.538021   30211 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:22:44.538031   30211 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:22:44.539842   30211 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0103 19:22:44.539855   30211 round_trippers.go:577] Response Headers:
	I0103 19:22:44.539870   30211 round_trippers.go:580]     Audit-Id: 0a3578d6-44b0-40da-9cb8-e61e9040a858
	I0103 19:22:44.539875   30211 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:22:44.539880   30211 round_trippers.go:580]     Content-Type: application/json
	I0103 19:22:44.539886   30211 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:22:44.539894   30211 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:22:44.539902   30211 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:22:44 GMT
	I0103 19:22:44.540119   30211 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-484895","namespace":"kube-system","uid":"a04de258-1f92-4ac7-8f30-18ad9ebb6d40","resourceVersion":"407","creationTimestamp":"2024-01-03T19:21:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"091c426717be69d480bcc59d28e953ce","kubernetes.io/config.mirror":"091c426717be69d480bcc59d28e953ce","kubernetes.io/config.seen":"2024-01-03T19:21:43.948371847Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:21:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I0103 19:22:44.540526   30211 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895
	I0103 19:22:44.540541   30211 round_trippers.go:469] Request Headers:
	I0103 19:22:44.540549   30211 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:22:44.540555   30211 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:22:44.542438   30211 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0103 19:22:44.542455   30211 round_trippers.go:577] Response Headers:
	I0103 19:22:44.542463   30211 round_trippers.go:580]     Audit-Id: 708aa35a-aa82-4607-bf5f-454134dbe4f7
	I0103 19:22:44.542471   30211 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:22:44.542478   30211 round_trippers.go:580]     Content-Type: application/json
	I0103 19:22:44.542485   30211 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:22:44.542493   30211 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:22:44.542508   30211 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:22:44 GMT
	I0103 19:22:44.542711   30211 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","resourceVersion":"382","creationTimestamp":"2024-01-03T19:21:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_21_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:21:40Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0103 19:22:44.543100   30211 pod_ready.go:92] pod "kube-controller-manager-multinode-484895" in "kube-system" namespace has status "Ready":"True"
	I0103 19:22:44.543118   30211 pod_ready.go:81] duration metric: took 5.154472ms waiting for pod "kube-controller-manager-multinode-484895" in "kube-system" namespace to be "Ready" ...
	I0103 19:22:44.543130   30211 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-k7jnm" in "kube-system" namespace to be "Ready" ...
	I0103 19:22:44.709492   30211 request.go:629] Waited for 166.277765ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/kube-proxy-k7jnm
	I0103 19:22:44.709574   30211 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/kube-proxy-k7jnm
	I0103 19:22:44.709582   30211 round_trippers.go:469] Request Headers:
	I0103 19:22:44.709592   30211 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:22:44.709602   30211 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:22:44.712744   30211 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0103 19:22:44.712767   30211 round_trippers.go:577] Response Headers:
	I0103 19:22:44.712774   30211 round_trippers.go:580]     Content-Type: application/json
	I0103 19:22:44.712779   30211 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:22:44.712784   30211 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:22:44.712790   30211 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:22:44 GMT
	I0103 19:22:44.712795   30211 round_trippers.go:580]     Audit-Id: e535744c-bea6-40f5-9a78-848f56a6b7b7
	I0103 19:22:44.712800   30211 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:22:44.712936   30211 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-k7jnm","generateName":"kube-proxy-","namespace":"kube-system","uid":"4b0bd9f4-9da5-42c6-83a4-0a3f05f640b3","resourceVersion":"470","creationTimestamp":"2024-01-03T19:22:34Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"93e45959-afd7-4869-a648-321076d75f45","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:22:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"93e45959-afd7-4869-a648-321076d75f45\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5522 chars]
	I0103 19:22:44.909861   30211 request.go:629] Waited for 196.42168ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.191:8443/api/v1/nodes/multinode-484895-m02
	I0103 19:22:44.909933   30211 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895-m02
	I0103 19:22:44.909940   30211 round_trippers.go:469] Request Headers:
	I0103 19:22:44.909951   30211 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:22:44.909959   30211 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:22:44.912578   30211 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:22:44.912601   30211 round_trippers.go:577] Response Headers:
	I0103 19:22:44.912612   30211 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:22:44.912621   30211 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:22:44 GMT
	I0103 19:22:44.912630   30211 round_trippers.go:580]     Audit-Id: c28f1c60-069f-4449-bc2c-2b0d532fb416
	I0103 19:22:44.912639   30211 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:22:44.912646   30211 round_trippers.go:580]     Content-Type: application/json
	I0103 19:22:44.912651   30211 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:22:44.912763   30211 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895-m02","uid":"7da57402-60a6-432d-91c4-768d87ae2e5f","resourceVersion":"483","creationTimestamp":"2024-01-03T19:22:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_22_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:22:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3253 chars]
	I0103 19:22:44.913118   30211 pod_ready.go:92] pod "kube-proxy-k7jnm" in "kube-system" namespace has status "Ready":"True"
	I0103 19:22:44.913139   30211 pod_ready.go:81] duration metric: took 370.000611ms waiting for pod "kube-proxy-k7jnm" in "kube-system" namespace to be "Ready" ...
	I0103 19:22:44.913154   30211 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tp9s2" in "kube-system" namespace to be "Ready" ...
	I0103 19:22:45.110419   30211 request.go:629] Waited for 197.199742ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tp9s2
	I0103 19:22:45.110497   30211 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tp9s2
	I0103 19:22:45.110508   30211 round_trippers.go:469] Request Headers:
	I0103 19:22:45.110538   30211 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:22:45.110548   30211 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:22:45.113213   30211 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:22:45.113237   30211 round_trippers.go:577] Response Headers:
	I0103 19:22:45.113249   30211 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:22:45 GMT
	I0103 19:22:45.113257   30211 round_trippers.go:580]     Audit-Id: 02229688-274b-48dd-8f1d-ecefed56da26
	I0103 19:22:45.113264   30211 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:22:45.113271   30211 round_trippers.go:580]     Content-Type: application/json
	I0103 19:22:45.113279   30211 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:22:45.113287   30211 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:22:45.113516   30211 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tp9s2","generateName":"kube-proxy-","namespace":"kube-system","uid":"728b1db9-b145-4ad3-b366-7fd8306d7a2a","resourceVersion":"373","creationTimestamp":"2024-01-03T19:21:56Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"93e45959-afd7-4869-a648-321076d75f45","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:21:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"93e45959-afd7-4869-a648-321076d75f45\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0103 19:22:45.309415   30211 request.go:629] Waited for 195.380364ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.191:8443/api/v1/nodes/multinode-484895
	I0103 19:22:45.309530   30211 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895
	I0103 19:22:45.309546   30211 round_trippers.go:469] Request Headers:
	I0103 19:22:45.309557   30211 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:22:45.309566   30211 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:22:45.313882   30211 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0103 19:22:45.313905   30211 round_trippers.go:577] Response Headers:
	I0103 19:22:45.313915   30211 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:22:45 GMT
	I0103 19:22:45.313922   30211 round_trippers.go:580]     Audit-Id: 7122754e-aacf-40a9-970d-1b009067597b
	I0103 19:22:45.313929   30211 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:22:45.313935   30211 round_trippers.go:580]     Content-Type: application/json
	I0103 19:22:45.313942   30211 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:22:45.313950   30211 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:22:45.314114   30211 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","resourceVersion":"382","creationTimestamp":"2024-01-03T19:21:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_21_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:21:40Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0103 19:22:45.314564   30211 pod_ready.go:92] pod "kube-proxy-tp9s2" in "kube-system" namespace has status "Ready":"True"
	I0103 19:22:45.314590   30211 pod_ready.go:81] duration metric: took 401.424174ms waiting for pod "kube-proxy-tp9s2" in "kube-system" namespace to be "Ready" ...
	I0103 19:22:45.314603   30211 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-484895" in "kube-system" namespace to be "Ready" ...
	I0103 19:22:45.510047   30211 request.go:629] Waited for 195.360666ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-484895
	I0103 19:22:45.510122   30211 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-484895
	I0103 19:22:45.510127   30211 round_trippers.go:469] Request Headers:
	I0103 19:22:45.510137   30211 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:22:45.510143   30211 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:22:45.513292   30211 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0103 19:22:45.513316   30211 round_trippers.go:577] Response Headers:
	I0103 19:22:45.513324   30211 round_trippers.go:580]     Audit-Id: 1e330565-60f2-4082-8d6c-ad07728ad8eb
	I0103 19:22:45.513330   30211 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:22:45.513335   30211 round_trippers.go:580]     Content-Type: application/json
	I0103 19:22:45.513340   30211 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:22:45.513345   30211 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:22:45.513351   30211 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:22:45 GMT
	I0103 19:22:45.513506   30211 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-484895","namespace":"kube-system","uid":"f981e6c0-1f4a-44ed-b043-c69ef28b4fa5","resourceVersion":"405","creationTimestamp":"2024-01-03T19:21:44Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"2de4242735fdb53c42fed3daf21e4e5e","kubernetes.io/config.mirror":"2de4242735fdb53c42fed3daf21e4e5e","kubernetes.io/config.seen":"2024-01-03T19:21:43.948372698Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:21:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I0103 19:22:45.710341   30211 request.go:629] Waited for 196.398951ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.191:8443/api/v1/nodes/multinode-484895
	I0103 19:22:45.710450   30211 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895
	I0103 19:22:45.710465   30211 round_trippers.go:469] Request Headers:
	I0103 19:22:45.710477   30211 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:22:45.710493   30211 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:22:45.713415   30211 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:22:45.713438   30211 round_trippers.go:577] Response Headers:
	I0103 19:22:45.713450   30211 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:22:45.713455   30211 round_trippers.go:580]     Content-Type: application/json
	I0103 19:22:45.713460   30211 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:22:45.713465   30211 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:22:45.713470   30211 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:22:45 GMT
	I0103 19:22:45.713475   30211 round_trippers.go:580]     Audit-Id: 71205952-d374-4867-848a-57b8fd4445cd
	I0103 19:22:45.714046   30211 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","resourceVersion":"382","creationTimestamp":"2024-01-03T19:21:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_21_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:21:40Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0103 19:22:45.714572   30211 pod_ready.go:92] pod "kube-scheduler-multinode-484895" in "kube-system" namespace has status "Ready":"True"
	I0103 19:22:45.714596   30211 pod_ready.go:81] duration metric: took 399.984937ms waiting for pod "kube-scheduler-multinode-484895" in "kube-system" namespace to be "Ready" ...
	I0103 19:22:45.714610   30211 pod_ready.go:38] duration metric: took 1.202355923s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 19:22:45.714625   30211 system_svc.go:44] waiting for kubelet service to be running ....
	I0103 19:22:45.714687   30211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 19:22:45.728463   30211 system_svc.go:56] duration metric: took 13.831449ms WaitForService to wait for kubelet.
	I0103 19:22:45.728498   30211 kubeadm.go:581] duration metric: took 10.2400495s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0103 19:22:45.728518   30211 node_conditions.go:102] verifying NodePressure condition ...
	I0103 19:22:45.909947   30211 request.go:629] Waited for 181.361266ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.191:8443/api/v1/nodes
	I0103 19:22:45.910019   30211 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes
	I0103 19:22:45.910024   30211 round_trippers.go:469] Request Headers:
	I0103 19:22:45.910035   30211 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:22:45.910041   30211 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:22:45.912952   30211 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:22:45.912975   30211 round_trippers.go:577] Response Headers:
	I0103 19:22:45.912981   30211 round_trippers.go:580]     Audit-Id: 61bc0fd2-404a-4b65-9190-455085c5175b
	I0103 19:22:45.912987   30211 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:22:45.912992   30211 round_trippers.go:580]     Content-Type: application/json
	I0103 19:22:45.912997   30211 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:22:45.913001   30211 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:22:45.913007   30211 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:22:45 GMT
	I0103 19:22:45.913434   30211 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"486"},"items":[{"metadata":{"name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","resourceVersion":"382","creationTimestamp":"2024-01-03T19:21:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_21_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 10197 chars]
	I0103 19:22:45.914125   30211 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0103 19:22:45.914157   30211 node_conditions.go:123] node cpu capacity is 2
	I0103 19:22:45.914169   30211 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0103 19:22:45.914179   30211 node_conditions.go:123] node cpu capacity is 2
	I0103 19:22:45.914184   30211 node_conditions.go:105] duration metric: took 185.661511ms to run NodePressure ...
	I0103 19:22:45.914197   30211 start.go:228] waiting for startup goroutines ...
	I0103 19:22:45.914240   30211 start.go:242] writing updated cluster config ...
	I0103 19:22:45.914610   30211 ssh_runner.go:195] Run: rm -f paused
	I0103 19:22:45.960250   30211 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0103 19:22:45.962269   30211 out.go:177] * Done! kubectl is now configured to use "multinode-484895" cluster and "default" namespace by default
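The start log above ends with the readiness loop polling each control-plane pod for the "Ready" condition, with several requests delayed by client-go's client-side rate limiter ("Waited ... due to client-side throttling, not priority and fairness"). The following is a minimal, illustrative client-go sketch of that pattern only, not minikube's own implementation: the kubeconfig path, namespace, pod name, QPS/Burst values, and timeout are assumptions chosen to mirror what the log shows.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's Ready condition is True, the same
    // check the log records as: has status "Ready":"True".
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        // Assumed kubeconfig location; minikube writes its context to the
        // default ~/.kube/config unless told otherwise.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        // Client-side throttling: requests beyond QPS/Burst are queued by the
        // client itself, which is what produces the "Waited ... due to
        // client-side throttling" lines above. Values here are illustrative.
        cfg.QPS = 5
        cfg.Burst = 10

        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // Poll an example pod (name taken from the log) until it is Ready,
        // for up to 6 minutes, matching the "waiting up to 6m0s" budget above.
        err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "kube-scheduler-multinode-484895", metav1.GetOptions{})
                if err != nil {
                    return false, nil // treat transient errors as "not ready yet"
                }
                return isPodReady(pod), nil
            })
        if err != nil {
            panic(err)
        }
        fmt.Println("pod is Ready")
    }

The CRI-O journal captured for the same node follows.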
	
	
	==> CRI-O <==
	-- Journal begins at Wed 2024-01-03 19:21:11 UTC, ends at Wed 2024-01-03 19:22:53 UTC. --
	Jan 03 19:22:53 multinode-484895 crio[713]: time="2024-01-03 19:22:53.581392778Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704309773581343820,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=a198940b-b05c-419f-85b9-520db830aa30 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 19:22:53 multinode-484895 crio[713]: time="2024-01-03 19:22:53.581917028Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=d0ea77fb-2abb-4256-8953-914247bb8a97 name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 19:22:53 multinode-484895 crio[713]: time="2024-01-03 19:22:53.581961999Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=d0ea77fb-2abb-4256-8953-914247bb8a97 name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 19:22:53 multinode-484895 crio[713]: time="2024-01-03 19:22:53.582259765Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2858805213aab5068588aa7397adbdc6974571f52236f1a7522e61ef34db6ecc,PodSandboxId:91f4636be9d7b3c8f5f34dd2d10c5e631332fd3344a5c248405e57b293f38c8b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1704309769927674082,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-xlczw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 442f70d7-17de-4ec1-99e0-f13f530e2d0f,},Annotations:map[string]string{io.kubernetes.container.hash: ca5df3d1,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:faa106c2bb4b8608d654b820a61d42974e7b299e32abc503e4dd7d69086b1e2d,PodSandboxId:39676dddce1eb7e1cef137b830829b0ba9b223b224e7093a06350934f0a87d76,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704309722283784871,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-wzsqb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e8dd1fe-7476-40d5-8cb7-562fcdc5deaa,},Annotations:map[string]string{io.kubernetes.container.hash: bc1d7ac1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d8f673287963463b02903a8c0627a6bbd5143fe1cc1e957fe6637d364f6866f,PodSandboxId:7968a9be14498d2429b071929f21f7d5817c43e787524ab9468e8e5e5bac5c78,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704309722016902749,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 82edd1c3-f361-4f86-8d59-8b89193d7a31,},Annotations:map[string]string{io.kubernetes.container.hash: 4f3e53d1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd91ff9cc4ce4be4f199e0a6b8b36456dcebb1468599331c6d3062dc8fc269d6,PodSandboxId:8ef056190f4051202a30b6d1c631e559d37a592bca6d53635f34441eaa7b3233,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1704309719297453142,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gqgk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 8d4f9028-52ad-44dd-83be-0bb7cc590b7f,},Annotations:map[string]string{io.kubernetes.container.hash: a3804f48,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01905aab4541ba9ab49dbd9332788ff9edf8db06159eb002f962818c664386d9,PodSandboxId:63e198ba8b99731eda656db016ffbefc4a8e6d9db6c1c5abec65cb9caca2683d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704309717567529401,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tp9s2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 728b1db9-b145-4ad3-b366-7fd830
6d7a2a,},Annotations:map[string]string{io.kubernetes.container.hash: 7d9fa95,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05db28465a0463d6cc1d538267e5cb81b85f193cfdd6ca69c6029bc3f40425e7,PodSandboxId:b88e8d3e0ce1c2306158299e91e81435cb6a2f6992b5a2f1168348749725f27e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704309697199303475,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-484895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bc39430cce393fdab624e5093adf15c,},Annotations:map[string]string{io.kubernetes.
container.hash: 447693cd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41eeb6c2fcedc40cc952f3d812791745d8c58cfb7d442db8ce2e14ed1d095444,PodSandboxId:2b8117e982c8e8ac5f1acc999894ffead26298b7f67d85c85f3c97799ca00d04,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704309697183208243,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-484895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2de4242735fdb53c42fed3daf21e4e5e,},Annotations:map[string]string{io.kubernetes.container.ha
sh: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:172f042fa9aad954aedad6a4eeda5224faa15964398f546c557653c377e4ba55,PodSandboxId:2f0c5524b73f6a682ca2e076f52c4a21c18b229b1ff67e995768aadc602e84fc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704309696865826664,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-484895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 091c426717be69d480bcc59d28e953ce,},Annotations:map[string]string{io
.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b95bdf953a6043e0c3784d789f5fb39ee212a5c99f8dcef59ac3e65bb422e26f,PodSandboxId:f7dff7eaa860665c99fe1163792ab39c5da9a6f52be789365a9fd25e6dc1adc5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704309696761900388,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-484895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2adb5a2561f637a585e38e2b73f2b809,},Annotations:map[string]string{io.kubernetes.
container.hash: 7933f556,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=d0ea77fb-2abb-4256-8953-914247bb8a97 name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 19:22:53 multinode-484895 crio[713]: time="2024-01-03 19:22:53.618458323Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=cfa1b2d8-11f8-4d44-8a55-54e9e80f0061 name=/runtime.v1.RuntimeService/Version
	Jan 03 19:22:53 multinode-484895 crio[713]: time="2024-01-03 19:22:53.618516281Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=cfa1b2d8-11f8-4d44-8a55-54e9e80f0061 name=/runtime.v1.RuntimeService/Version
	Jan 03 19:22:53 multinode-484895 crio[713]: time="2024-01-03 19:22:53.619809805Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=89b60701-1c59-4de8-bb9f-975d9744c2cb name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 19:22:53 multinode-484895 crio[713]: time="2024-01-03 19:22:53.620261822Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704309773620247928,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=89b60701-1c59-4de8-bb9f-975d9744c2cb name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 19:22:53 multinode-484895 crio[713]: time="2024-01-03 19:22:53.620725190Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c4ed7f1d-fb8b-4327-abed-d799b829c309 name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 19:22:53 multinode-484895 crio[713]: time="2024-01-03 19:22:53.620797548Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c4ed7f1d-fb8b-4327-abed-d799b829c309 name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 19:22:53 multinode-484895 crio[713]: time="2024-01-03 19:22:53.621044286Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2858805213aab5068588aa7397adbdc6974571f52236f1a7522e61ef34db6ecc,PodSandboxId:91f4636be9d7b3c8f5f34dd2d10c5e631332fd3344a5c248405e57b293f38c8b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1704309769927674082,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-xlczw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 442f70d7-17de-4ec1-99e0-f13f530e2d0f,},Annotations:map[string]string{io.kubernetes.container.hash: ca5df3d1,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:faa106c2bb4b8608d654b820a61d42974e7b299e32abc503e4dd7d69086b1e2d,PodSandboxId:39676dddce1eb7e1cef137b830829b0ba9b223b224e7093a06350934f0a87d76,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704309722283784871,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-wzsqb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e8dd1fe-7476-40d5-8cb7-562fcdc5deaa,},Annotations:map[string]string{io.kubernetes.container.hash: bc1d7ac1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d8f673287963463b02903a8c0627a6bbd5143fe1cc1e957fe6637d364f6866f,PodSandboxId:7968a9be14498d2429b071929f21f7d5817c43e787524ab9468e8e5e5bac5c78,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704309722016902749,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 82edd1c3-f361-4f86-8d59-8b89193d7a31,},Annotations:map[string]string{io.kubernetes.container.hash: 4f3e53d1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd91ff9cc4ce4be4f199e0a6b8b36456dcebb1468599331c6d3062dc8fc269d6,PodSandboxId:8ef056190f4051202a30b6d1c631e559d37a592bca6d53635f34441eaa7b3233,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1704309719297453142,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gqgk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 8d4f9028-52ad-44dd-83be-0bb7cc590b7f,},Annotations:map[string]string{io.kubernetes.container.hash: a3804f48,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01905aab4541ba9ab49dbd9332788ff9edf8db06159eb002f962818c664386d9,PodSandboxId:63e198ba8b99731eda656db016ffbefc4a8e6d9db6c1c5abec65cb9caca2683d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704309717567529401,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tp9s2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 728b1db9-b145-4ad3-b366-7fd830
6d7a2a,},Annotations:map[string]string{io.kubernetes.container.hash: 7d9fa95,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05db28465a0463d6cc1d538267e5cb81b85f193cfdd6ca69c6029bc3f40425e7,PodSandboxId:b88e8d3e0ce1c2306158299e91e81435cb6a2f6992b5a2f1168348749725f27e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704309697199303475,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-484895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bc39430cce393fdab624e5093adf15c,},Annotations:map[string]string{io.kubernetes.
container.hash: 447693cd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41eeb6c2fcedc40cc952f3d812791745d8c58cfb7d442db8ce2e14ed1d095444,PodSandboxId:2b8117e982c8e8ac5f1acc999894ffead26298b7f67d85c85f3c97799ca00d04,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704309697183208243,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-484895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2de4242735fdb53c42fed3daf21e4e5e,},Annotations:map[string]string{io.kubernetes.container.ha
sh: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:172f042fa9aad954aedad6a4eeda5224faa15964398f546c557653c377e4ba55,PodSandboxId:2f0c5524b73f6a682ca2e076f52c4a21c18b229b1ff67e995768aadc602e84fc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704309696865826664,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-484895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 091c426717be69d480bcc59d28e953ce,},Annotations:map[string]string{io
.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b95bdf953a6043e0c3784d789f5fb39ee212a5c99f8dcef59ac3e65bb422e26f,PodSandboxId:f7dff7eaa860665c99fe1163792ab39c5da9a6f52be789365a9fd25e6dc1adc5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704309696761900388,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-484895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2adb5a2561f637a585e38e2b73f2b809,},Annotations:map[string]string{io.kubernetes.
container.hash: 7933f556,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c4ed7f1d-fb8b-4327-abed-d799b829c309 name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 19:22:53 multinode-484895 crio[713]: time="2024-01-03 19:22:53.655154408Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=458815ef-567d-4bf3-b3fe-28ef6bb37740 name=/runtime.v1.RuntimeService/Version
	Jan 03 19:22:53 multinode-484895 crio[713]: time="2024-01-03 19:22:53.655213042Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=458815ef-567d-4bf3-b3fe-28ef6bb37740 name=/runtime.v1.RuntimeService/Version
	Jan 03 19:22:53 multinode-484895 crio[713]: time="2024-01-03 19:22:53.656041231Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=7a1c5d7f-7542-4c6d-83d7-f1e7fd1424f9 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 19:22:53 multinode-484895 crio[713]: time="2024-01-03 19:22:53.656477861Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704309773656464500,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=7a1c5d7f-7542-4c6d-83d7-f1e7fd1424f9 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 19:22:53 multinode-484895 crio[713]: time="2024-01-03 19:22:53.656933078Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a742296d-3886-4bb6-912e-a8ab8829a5b0 name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 19:22:53 multinode-484895 crio[713]: time="2024-01-03 19:22:53.656974497Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a742296d-3886-4bb6-912e-a8ab8829a5b0 name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 19:22:53 multinode-484895 crio[713]: time="2024-01-03 19:22:53.657266489Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2858805213aab5068588aa7397adbdc6974571f52236f1a7522e61ef34db6ecc,PodSandboxId:91f4636be9d7b3c8f5f34dd2d10c5e631332fd3344a5c248405e57b293f38c8b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1704309769927674082,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-xlczw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 442f70d7-17de-4ec1-99e0-f13f530e2d0f,},Annotations:map[string]string{io.kubernetes.container.hash: ca5df3d1,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:faa106c2bb4b8608d654b820a61d42974e7b299e32abc503e4dd7d69086b1e2d,PodSandboxId:39676dddce1eb7e1cef137b830829b0ba9b223b224e7093a06350934f0a87d76,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704309722283784871,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-wzsqb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e8dd1fe-7476-40d5-8cb7-562fcdc5deaa,},Annotations:map[string]string{io.kubernetes.container.hash: bc1d7ac1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d8f673287963463b02903a8c0627a6bbd5143fe1cc1e957fe6637d364f6866f,PodSandboxId:7968a9be14498d2429b071929f21f7d5817c43e787524ab9468e8e5e5bac5c78,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704309722016902749,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 82edd1c3-f361-4f86-8d59-8b89193d7a31,},Annotations:map[string]string{io.kubernetes.container.hash: 4f3e53d1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd91ff9cc4ce4be4f199e0a6b8b36456dcebb1468599331c6d3062dc8fc269d6,PodSandboxId:8ef056190f4051202a30b6d1c631e559d37a592bca6d53635f34441eaa7b3233,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1704309719297453142,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gqgk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 8d4f9028-52ad-44dd-83be-0bb7cc590b7f,},Annotations:map[string]string{io.kubernetes.container.hash: a3804f48,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01905aab4541ba9ab49dbd9332788ff9edf8db06159eb002f962818c664386d9,PodSandboxId:63e198ba8b99731eda656db016ffbefc4a8e6d9db6c1c5abec65cb9caca2683d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704309717567529401,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tp9s2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 728b1db9-b145-4ad3-b366-7fd830
6d7a2a,},Annotations:map[string]string{io.kubernetes.container.hash: 7d9fa95,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05db28465a0463d6cc1d538267e5cb81b85f193cfdd6ca69c6029bc3f40425e7,PodSandboxId:b88e8d3e0ce1c2306158299e91e81435cb6a2f6992b5a2f1168348749725f27e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704309697199303475,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-484895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bc39430cce393fdab624e5093adf15c,},Annotations:map[string]string{io.kubernetes.
container.hash: 447693cd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41eeb6c2fcedc40cc952f3d812791745d8c58cfb7d442db8ce2e14ed1d095444,PodSandboxId:2b8117e982c8e8ac5f1acc999894ffead26298b7f67d85c85f3c97799ca00d04,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704309697183208243,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-484895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2de4242735fdb53c42fed3daf21e4e5e,},Annotations:map[string]string{io.kubernetes.container.ha
sh: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:172f042fa9aad954aedad6a4eeda5224faa15964398f546c557653c377e4ba55,PodSandboxId:2f0c5524b73f6a682ca2e076f52c4a21c18b229b1ff67e995768aadc602e84fc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704309696865826664,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-484895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 091c426717be69d480bcc59d28e953ce,},Annotations:map[string]string{io
.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b95bdf953a6043e0c3784d789f5fb39ee212a5c99f8dcef59ac3e65bb422e26f,PodSandboxId:f7dff7eaa860665c99fe1163792ab39c5da9a6f52be789365a9fd25e6dc1adc5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704309696761900388,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-484895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2adb5a2561f637a585e38e2b73f2b809,},Annotations:map[string]string{io.kubernetes.
container.hash: 7933f556,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a742296d-3886-4bb6-912e-a8ab8829a5b0 name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 19:22:53 multinode-484895 crio[713]: time="2024-01-03 19:22:53.692111064Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=c62daa51-5a72-4f4a-a331-04144cbc8bdf name=/runtime.v1.RuntimeService/Version
	Jan 03 19:22:53 multinode-484895 crio[713]: time="2024-01-03 19:22:53.692192516Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=c62daa51-5a72-4f4a-a331-04144cbc8bdf name=/runtime.v1.RuntimeService/Version
	Jan 03 19:22:53 multinode-484895 crio[713]: time="2024-01-03 19:22:53.693915566Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=aaed3c37-6603-4270-a9dd-4b3482c350e2 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 19:22:53 multinode-484895 crio[713]: time="2024-01-03 19:22:53.694392041Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704309773694376507,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=aaed3c37-6603-4270-a9dd-4b3482c350e2 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 19:22:53 multinode-484895 crio[713]: time="2024-01-03 19:22:53.695014591Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f9c12d42-ff8e-48cc-91e0-681ba24e7583 name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 19:22:53 multinode-484895 crio[713]: time="2024-01-03 19:22:53.695140323Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f9c12d42-ff8e-48cc-91e0-681ba24e7583 name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 19:22:53 multinode-484895 crio[713]: time="2024-01-03 19:22:53.695347699Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2858805213aab5068588aa7397adbdc6974571f52236f1a7522e61ef34db6ecc,PodSandboxId:91f4636be9d7b3c8f5f34dd2d10c5e631332fd3344a5c248405e57b293f38c8b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1704309769927674082,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-xlczw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 442f70d7-17de-4ec1-99e0-f13f530e2d0f,},Annotations:map[string]string{io.kubernetes.container.hash: ca5df3d1,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:faa106c2bb4b8608d654b820a61d42974e7b299e32abc503e4dd7d69086b1e2d,PodSandboxId:39676dddce1eb7e1cef137b830829b0ba9b223b224e7093a06350934f0a87d76,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704309722283784871,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-wzsqb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e8dd1fe-7476-40d5-8cb7-562fcdc5deaa,},Annotations:map[string]string{io.kubernetes.container.hash: bc1d7ac1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d8f673287963463b02903a8c0627a6bbd5143fe1cc1e957fe6637d364f6866f,PodSandboxId:7968a9be14498d2429b071929f21f7d5817c43e787524ab9468e8e5e5bac5c78,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704309722016902749,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 82edd1c3-f361-4f86-8d59-8b89193d7a31,},Annotations:map[string]string{io.kubernetes.container.hash: 4f3e53d1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd91ff9cc4ce4be4f199e0a6b8b36456dcebb1468599331c6d3062dc8fc269d6,PodSandboxId:8ef056190f4051202a30b6d1c631e559d37a592bca6d53635f34441eaa7b3233,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1704309719297453142,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gqgk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 8d4f9028-52ad-44dd-83be-0bb7cc590b7f,},Annotations:map[string]string{io.kubernetes.container.hash: a3804f48,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01905aab4541ba9ab49dbd9332788ff9edf8db06159eb002f962818c664386d9,PodSandboxId:63e198ba8b99731eda656db016ffbefc4a8e6d9db6c1c5abec65cb9caca2683d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704309717567529401,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tp9s2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 728b1db9-b145-4ad3-b366-7fd830
6d7a2a,},Annotations:map[string]string{io.kubernetes.container.hash: 7d9fa95,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05db28465a0463d6cc1d538267e5cb81b85f193cfdd6ca69c6029bc3f40425e7,PodSandboxId:b88e8d3e0ce1c2306158299e91e81435cb6a2f6992b5a2f1168348749725f27e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704309697199303475,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-484895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bc39430cce393fdab624e5093adf15c,},Annotations:map[string]string{io.kubernetes.
container.hash: 447693cd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41eeb6c2fcedc40cc952f3d812791745d8c58cfb7d442db8ce2e14ed1d095444,PodSandboxId:2b8117e982c8e8ac5f1acc999894ffead26298b7f67d85c85f3c97799ca00d04,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704309697183208243,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-484895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2de4242735fdb53c42fed3daf21e4e5e,},Annotations:map[string]string{io.kubernetes.container.ha
sh: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:172f042fa9aad954aedad6a4eeda5224faa15964398f546c557653c377e4ba55,PodSandboxId:2f0c5524b73f6a682ca2e076f52c4a21c18b229b1ff67e995768aadc602e84fc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704309696865826664,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-484895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 091c426717be69d480bcc59d28e953ce,},Annotations:map[string]string{io
.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b95bdf953a6043e0c3784d789f5fb39ee212a5c99f8dcef59ac3e65bb422e26f,PodSandboxId:f7dff7eaa860665c99fe1163792ab39c5da9a6f52be789365a9fd25e6dc1adc5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704309696761900388,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-484895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2adb5a2561f637a585e38e2b73f2b809,},Annotations:map[string]string{io.kubernetes.
container.hash: 7933f556,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f9c12d42-ff8e-48cc-91e0-681ba24e7583 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	2858805213aab       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 seconds ago        Running             busybox                   0                   91f4636be9d7b       busybox-5bc68d56bd-xlczw
	faa106c2bb4b8       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      51 seconds ago       Running             coredns                   0                   39676dddce1eb       coredns-5dd5756b68-wzsqb
	7d8f673287963       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      51 seconds ago       Running             storage-provisioner       0                   7968a9be14498       storage-provisioner
	dd91ff9cc4ce4       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                      54 seconds ago       Running             kindnet-cni               0                   8ef056190f405       kindnet-gqgk2
	01905aab4541b       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      56 seconds ago       Running             kube-proxy                0                   63e198ba8b997       kube-proxy-tp9s2
	05db28465a046       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      About a minute ago   Running             etcd                      0                   b88e8d3e0ce1c       etcd-multinode-484895
	41eeb6c2fcedc       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      About a minute ago   Running             kube-scheduler            0                   2b8117e982c8e       kube-scheduler-multinode-484895
	172f042fa9aad       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      About a minute ago   Running             kube-controller-manager   0                   2f0c5524b73f6       kube-controller-manager-multinode-484895
	b95bdf953a604       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      About a minute ago   Running             kube-apiserver            0                   f7dff7eaa8606       kube-apiserver-multinode-484895
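	
	A table with this layout is what CRI-O's crictl client prints; a minimal sketch of reproducing it on the node, assuming crictl is available in the guest and pointed at the cri-o socket advertised in the node annotations below, would be:
	
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a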
	
	
	==> coredns [faa106c2bb4b8608d654b820a61d42974e7b299e32abc503e4dd7d69086b1e2d] <==
	[INFO] 10.244.1.2:38710 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000119675s
	[INFO] 10.244.0.3:46236 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133027s
	[INFO] 10.244.0.3:45128 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002108748s
	[INFO] 10.244.0.3:43626 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00008596s
	[INFO] 10.244.0.3:56318 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000042709s
	[INFO] 10.244.0.3:36358 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001554359s
	[INFO] 10.244.0.3:35576 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000154711s
	[INFO] 10.244.0.3:43104 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000071303s
	[INFO] 10.244.0.3:40294 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000075075s
	[INFO] 10.244.1.2:45014 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134354s
	[INFO] 10.244.1.2:39655 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000168465s
	[INFO] 10.244.1.2:52188 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00009951s
	[INFO] 10.244.1.2:41762 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000162955s
	[INFO] 10.244.0.3:48470 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000070607s
	[INFO] 10.244.0.3:54041 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000114915s
	[INFO] 10.244.0.3:47082 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000037373s
	[INFO] 10.244.0.3:53125 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00005s
	[INFO] 10.244.1.2:56356 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000149376s
	[INFO] 10.244.1.2:49901 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000229027s
	[INFO] 10.244.1.2:42996 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000126037s
	[INFO] 10.244.1.2:44761 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000129243s
	[INFO] 10.244.0.3:59882 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000099792s
	[INFO] 10.244.0.3:44173 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000110186s
	[INFO] 10.244.0.3:53876 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000065854s
	[INFO] 10.244.0.3:41091 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000061763s
	
	
	==> describe nodes <==
	Name:               multinode-484895
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-484895
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b6a81cbc05f28310ff11df4170e79e2b8bf477a
	                    minikube.k8s.io/name=multinode-484895
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_03T19_21_45_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 03 Jan 2024 19:21:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-484895
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 03 Jan 2024 19:22:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 03 Jan 2024 19:22:01 +0000   Wed, 03 Jan 2024 19:21:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 03 Jan 2024 19:22:01 +0000   Wed, 03 Jan 2024 19:21:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 03 Jan 2024 19:22:01 +0000   Wed, 03 Jan 2024 19:21:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 03 Jan 2024 19:22:01 +0000   Wed, 03 Jan 2024 19:22:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.191
	  Hostname:    multinode-484895
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 e5c89e44ca554cc2a8a70afbb74e5669
	  System UUID:                e5c89e44-ca55-4cc2-a8a7-0afbb74e5669
	  Boot ID:                    6f0bad14-0c2a-4ad0-be0a-cd57cd088ffd
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-xlczw                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 coredns-5dd5756b68-wzsqb                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     57s
	  kube-system                 etcd-multinode-484895                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         69s
	  kube-system                 kindnet-gqgk2                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      57s
	  kube-system                 kube-apiserver-multinode-484895             250m (12%)    0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 kube-controller-manager-multinode-484895    200m (10%)    0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 kube-proxy-tp9s2                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kube-system                 kube-scheduler-multinode-484895             100m (5%)     0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 56s   kube-proxy       
	  Normal  Starting                 70s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  69s   kubelet          Node multinode-484895 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    69s   kubelet          Node multinode-484895 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     69s   kubelet          Node multinode-484895 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  69s   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           57s   node-controller  Node multinode-484895 event: Registered Node multinode-484895 in Controller
	  Normal  NodeReady                52s   kubelet          Node multinode-484895 status is now: NodeReady
	
	
	Name:               multinode-484895-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-484895-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b6a81cbc05f28310ff11df4170e79e2b8bf477a
	                    minikube.k8s.io/name=multinode-484895
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_01_03T19_22_35_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 03 Jan 2024 19:22:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-484895-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 03 Jan 2024 19:22:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 03 Jan 2024 19:22:44 +0000   Wed, 03 Jan 2024 19:22:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 03 Jan 2024 19:22:44 +0000   Wed, 03 Jan 2024 19:22:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 03 Jan 2024 19:22:44 +0000   Wed, 03 Jan 2024 19:22:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 03 Jan 2024 19:22:44 +0000   Wed, 03 Jan 2024 19:22:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.86
	  Hostname:    multinode-484895-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 e24fb0b823cc45c0b97314958a26978c
	  System UUID:                e24fb0b8-23cc-45c0-b973-14958a26978c
	  Boot ID:                    220da2d3-eb9e-489d-9d50-c1c84cedcbb3
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-lmcnh    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kindnet-lfkpk               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      19s
	  kube-system                 kube-proxy-k7jnm            0 (0%)        0 (0%)      0 (0%)           0 (0%)         19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15s                kube-proxy       
	  Normal  NodeHasSufficientMemory  19s (x5 over 21s)  kubelet          Node multinode-484895-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19s (x5 over 21s)  kubelet          Node multinode-484895-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19s (x5 over 21s)  kubelet          Node multinode-484895-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           17s                node-controller  Node multinode-484895-m02 event: Registered Node multinode-484895-m02 in Controller
	  Normal  NodeReady                9s                 kubelet          Node multinode-484895-m02 status is now: NodeReady
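	
	The two node descriptions above have the shape of kubectl's describe output; a minimal sketch of regenerating them, assuming the kubeconfig context created by minikube carries the profile name, would be:
	
	  kubectl --context multinode-484895 describe nodes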
	
	
	==> dmesg <==
	[Jan 3 19:21] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.062466] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.300150] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.656871] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.126909] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +4.980851] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.914451] systemd-fstab-generator[638]: Ignoring "noauto" for root device
	[  +0.098177] systemd-fstab-generator[649]: Ignoring "noauto" for root device
	[  +0.141716] systemd-fstab-generator[662]: Ignoring "noauto" for root device
	[  +0.122991] systemd-fstab-generator[673]: Ignoring "noauto" for root device
	[  +0.212156] systemd-fstab-generator[697]: Ignoring "noauto" for root device
	[  +9.467847] systemd-fstab-generator[921]: Ignoring "noauto" for root device
	[  +8.264809] systemd-fstab-generator[1256]: Ignoring "noauto" for root device
	[Jan 3 19:22] kauditd_printk_skb: 18 callbacks suppressed
	
	
	==> etcd [05db28465a0463d6cc1d538267e5cb81b85f193cfdd6ca69c6029bc3f40425e7] <==
	{"level":"info","ts":"2024-01-03T19:21:39.163644Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f21a8e08563785d2 became candidate at term 2"}
	{"level":"info","ts":"2024-01-03T19:21:39.163673Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f21a8e08563785d2 received MsgVoteResp from f21a8e08563785d2 at term 2"}
	{"level":"info","ts":"2024-01-03T19:21:39.163767Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f21a8e08563785d2 became leader at term 2"}
	{"level":"info","ts":"2024-01-03T19:21:39.163796Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f21a8e08563785d2 elected leader f21a8e08563785d2 at term 2"}
	{"level":"info","ts":"2024-01-03T19:21:39.165458Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f21a8e08563785d2","local-member-attributes":"{Name:multinode-484895 ClientURLs:[https://192.168.39.191:2379]}","request-path":"/0/members/f21a8e08563785d2/attributes","cluster-id":"78cc5c67b96828b5","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-03T19:21:39.165752Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-03T19:21:39.16613Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-03T19:21:39.166507Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-03T19:21:39.166607Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-03T19:21:39.166742Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-03T19:21:39.167261Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-03T19:21:39.16926Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.191:2379"}
	{"level":"info","ts":"2024-01-03T19:21:39.169763Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"78cc5c67b96828b5","local-member-id":"f21a8e08563785d2","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-03T19:21:39.169998Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-03T19:21:39.170147Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"warn","ts":"2024-01-03T19:22:36.527408Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"176.348913ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9642924580459878031 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/minions/multinode-484895-m02\" mod_revision:458 > success:<request_put:<key:\"/registry/minions/multinode-484895-m02\" value_size:2946 >> failure:<request_range:<key:\"/registry/minions/multinode-484895-m02\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-01-03T19:22:36.528994Z","caller":"traceutil/trace.go:171","msg":"trace[1839666360] transaction","detail":"{read_only:false; response_revision:462; number_of_response:1; }","duration":"151.634366ms","start":"2024-01-03T19:22:36.377339Z","end":"2024-01-03T19:22:36.528974Z","steps":["trace[1839666360] 'process raft request'  (duration: 151.5653ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-03T19:22:36.529231Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"168.580038ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-03T19:22:36.529276Z","caller":"traceutil/trace.go:171","msg":"trace[679828744] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:462; }","duration":"168.680011ms","start":"2024-01-03T19:22:36.36059Z","end":"2024-01-03T19:22:36.52927Z","steps":["trace[679828744] 'agreement among raft nodes before linearized reading'  (duration: 168.52608ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-03T19:22:36.52903Z","caller":"traceutil/trace.go:171","msg":"trace[518265016] transaction","detail":"{read_only:false; response_revision:461; number_of_response:1; }","duration":"360.001353ms","start":"2024-01-03T19:22:36.169013Z","end":"2024-01-03T19:22:36.529014Z","steps":["trace[518265016] 'process raft request'  (duration: 181.511111ms)","trace[518265016] 'compare'  (duration: 176.218036ms)"],"step_count":2}
	{"level":"warn","ts":"2024-01-03T19:22:36.529391Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-03T19:22:36.168986Z","time spent":"360.362802ms","remote":"127.0.0.1:51940","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2992,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/minions/multinode-484895-m02\" mod_revision:458 > success:<request_put:<key:\"/registry/minions/multinode-484895-m02\" value_size:2946 >> failure:<request_range:<key:\"/registry/minions/multinode-484895-m02\" > >"}
	{"level":"info","ts":"2024-01-03T19:22:36.529104Z","caller":"traceutil/trace.go:171","msg":"trace[1457920908] linearizableReadLoop","detail":"{readStateIndex:479; appliedIndex:478; }","duration":"168.483434ms","start":"2024-01-03T19:22:36.360614Z","end":"2024-01-03T19:22:36.529097Z","steps":["trace[1457920908] 'read index received'  (duration: 25.399µs)","trace[1457920908] 'applied index is now lower than readState.Index'  (duration: 168.456807ms)"],"step_count":2}
	{"level":"warn","ts":"2024-01-03T19:22:36.529673Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"119.459967ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1116"}
	{"level":"info","ts":"2024-01-03T19:22:36.52972Z","caller":"traceutil/trace.go:171","msg":"trace[287663191] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:462; }","duration":"119.508079ms","start":"2024-01-03T19:22:36.410201Z","end":"2024-01-03T19:22:36.529709Z","steps":["trace[287663191] 'agreement among raft nodes before linearized reading'  (duration: 119.430821ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-03T19:22:36.689427Z","caller":"traceutil/trace.go:171","msg":"trace[14423177] transaction","detail":"{read_only:false; response_revision:463; number_of_response:1; }","duration":"155.072013ms","start":"2024-01-03T19:22:36.534341Z","end":"2024-01-03T19:22:36.689413Z","steps":["trace[14423177] 'process raft request'  (duration: 154.099995ms)"],"step_count":1}
	
	
	==> kernel <==
	 19:22:54 up 1 min,  0 users,  load average: 0.41, 0.17, 0.06
	Linux multinode-484895 5.10.57 #1 SMP Sat Dec 16 11:03:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kindnet [dd91ff9cc4ce4be4f199e0a6b8b36456dcebb1468599331c6d3062dc8fc269d6] <==
	I0103 19:22:00.053009       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0103 19:22:00.053132       1 main.go:107] hostIP = 192.168.39.191
	podIP = 192.168.39.191
	I0103 19:22:00.053378       1 main.go:116] setting mtu 1500 for CNI 
	I0103 19:22:00.053410       1 main.go:146] kindnetd IP family: "ipv4"
	I0103 19:22:00.053432       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0103 19:22:00.647990       1 main.go:223] Handling node with IPs: map[192.168.39.191:{}]
	I0103 19:22:00.648131       1 main.go:227] handling current node
	I0103 19:22:10.664402       1 main.go:223] Handling node with IPs: map[192.168.39.191:{}]
	I0103 19:22:10.664487       1 main.go:227] handling current node
	I0103 19:22:20.673991       1 main.go:223] Handling node with IPs: map[192.168.39.191:{}]
	I0103 19:22:20.674159       1 main.go:227] handling current node
	I0103 19:22:30.679381       1 main.go:223] Handling node with IPs: map[192.168.39.191:{}]
	I0103 19:22:30.679429       1 main.go:227] handling current node
	I0103 19:22:40.686990       1 main.go:223] Handling node with IPs: map[192.168.39.191:{}]
	I0103 19:22:40.687042       1 main.go:227] handling current node
	I0103 19:22:40.687111       1 main.go:223] Handling node with IPs: map[192.168.39.86:{}]
	I0103 19:22:40.687120       1 main.go:250] Node multinode-484895-m02 has CIDR [10.244.1.0/24] 
	I0103 19:22:40.687315       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.39.86 Flags: [] Table: 0} 
	I0103 19:22:50.693226       1 main.go:223] Handling node with IPs: map[192.168.39.191:{}]
	I0103 19:22:50.693271       1 main.go:227] handling current node
	I0103 19:22:50.693282       1 main.go:223] Handling node with IPs: map[192.168.39.86:{}]
	I0103 19:22:50.693288       1 main.go:250] Node multinode-484895-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [b95bdf953a6043e0c3784d789f5fb39ee212a5c99f8dcef59ac3e65bb422e26f] <==
	I0103 19:21:40.768317       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0103 19:21:40.791410       1 controller.go:624] quota admission added evaluator for: namespaces
	I0103 19:21:40.810685       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0103 19:21:40.810773       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0103 19:21:40.814161       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0103 19:21:40.814248       1 aggregator.go:166] initial CRD sync complete...
	I0103 19:21:40.814274       1 autoregister_controller.go:141] Starting autoregister controller
	I0103 19:21:40.814281       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0103 19:21:40.814286       1 cache.go:39] Caches are synced for autoregister controller
	I0103 19:21:40.825168       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0103 19:21:41.609017       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0103 19:21:41.613849       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0103 19:21:41.613882       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0103 19:21:42.242255       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0103 19:21:42.282538       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0103 19:21:42.330737       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0103 19:21:42.340498       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.191]
	I0103 19:21:42.341473       1 controller.go:624] quota admission added evaluator for: endpoints
	I0103 19:21:42.347585       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0103 19:21:42.688657       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0103 19:21:43.809595       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0103 19:21:43.826740       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0103 19:21:43.841630       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0103 19:21:56.200575       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0103 19:21:56.401015       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [172f042fa9aad954aedad6a4eeda5224faa15964398f546c557653c377e4ba55] <==
	I0103 19:21:56.937017       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="224.73495ms"
	I0103 19:21:57.030798       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="93.662128ms"
	I0103 19:21:57.030943       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="51.54µs"
	I0103 19:22:01.234889       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="107.256µs"
	I0103 19:22:01.262856       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="62.335µs"
	I0103 19:22:03.204652       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="13.206962ms"
	I0103 19:22:03.204785       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="50.006µs"
	I0103 19:22:06.154166       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0103 19:22:34.316228       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-484895-m02\" does not exist"
	I0103 19:22:34.328234       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-484895-m02" podCIDRs=["10.244.1.0/24"]
	I0103 19:22:34.347543       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-lfkpk"
	I0103 19:22:34.347588       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-k7jnm"
	I0103 19:22:36.159626       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-484895-m02"
	I0103 19:22:36.159830       1 event.go:307] "Event occurred" object="multinode-484895-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-484895-m02 event: Registered Node multinode-484895-m02 in Controller"
	I0103 19:22:44.464306       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-484895-m02"
	I0103 19:22:46.638236       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I0103 19:22:46.654030       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-lmcnh"
	I0103 19:22:46.669178       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-xlczw"
	I0103 19:22:46.684922       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="46.382188ms"
	I0103 19:22:46.715414       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="30.240266ms"
	I0103 19:22:46.715682       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="115.765µs"
	I0103 19:22:49.946049       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="13.726569ms"
	I0103 19:22:49.946340       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="98.658µs"
	I0103 19:22:50.351350       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="14.281009ms"
	I0103 19:22:50.351533       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="113.889µs"
	
	
	==> kube-proxy [01905aab4541ba9ab49dbd9332788ff9edf8db06159eb002f962818c664386d9] <==
	I0103 19:21:57.812380       1 server_others.go:69] "Using iptables proxy"
	I0103 19:21:57.826938       1 node.go:141] Successfully retrieved node IP: 192.168.39.191
	I0103 19:21:57.877466       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0103 19:21:57.877561       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0103 19:21:57.880017       1 server_others.go:152] "Using iptables Proxier"
	I0103 19:21:57.880192       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0103 19:21:57.880494       1 server.go:846] "Version info" version="v1.28.4"
	I0103 19:21:57.880539       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0103 19:21:57.882543       1 config.go:188] "Starting service config controller"
	I0103 19:21:57.882669       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0103 19:21:57.882776       1 config.go:97] "Starting endpoint slice config controller"
	I0103 19:21:57.882801       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0103 19:21:57.885988       1 config.go:315] "Starting node config controller"
	I0103 19:21:57.886024       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0103 19:21:57.983156       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0103 19:21:57.983178       1 shared_informer.go:318] Caches are synced for service config
	I0103 19:21:57.986576       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [41eeb6c2fcedc40cc952f3d812791745d8c58cfb7d442db8ce2e14ed1d095444] <==
	W0103 19:21:40.806121       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0103 19:21:40.806182       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0103 19:21:40.806290       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0103 19:21:40.806325       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0103 19:21:40.808131       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0103 19:21:40.808195       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0103 19:21:41.660036       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0103 19:21:41.660181       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0103 19:21:41.708460       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0103 19:21:41.708508       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0103 19:21:41.762860       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0103 19:21:41.762988       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0103 19:21:41.787280       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0103 19:21:41.787371       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0103 19:21:41.890152       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0103 19:21:41.890200       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0103 19:21:41.936001       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0103 19:21:41.936049       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0103 19:21:41.965845       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0103 19:21:41.965944       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0103 19:21:41.984996       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0103 19:21:41.985046       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0103 19:21:42.030961       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0103 19:21:42.031009       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0103 19:21:43.458739       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Wed 2024-01-03 19:21:11 UTC, ends at Wed 2024-01-03 19:22:54 UTC. --
	Jan 03 19:21:56 multinode-484895 kubelet[1263]: I0103 19:21:56.269208    1263 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72fjb\" (UniqueName: \"kubernetes.io/projected/8d4f9028-52ad-44dd-83be-0bb7cc590b7f-kube-api-access-72fjb\") pod \"kindnet-gqgk2\" (UID: \"8d4f9028-52ad-44dd-83be-0bb7cc590b7f\") " pod="kube-system/kindnet-gqgk2"
	Jan 03 19:21:56 multinode-484895 kubelet[1263]: I0103 19:21:56.269269    1263 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8d4f9028-52ad-44dd-83be-0bb7cc590b7f-xtables-lock\") pod \"kindnet-gqgk2\" (UID: \"8d4f9028-52ad-44dd-83be-0bb7cc590b7f\") " pod="kube-system/kindnet-gqgk2"
	Jan 03 19:21:56 multinode-484895 kubelet[1263]: I0103 19:21:56.269291    1263 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8d4f9028-52ad-44dd-83be-0bb7cc590b7f-lib-modules\") pod \"kindnet-gqgk2\" (UID: \"8d4f9028-52ad-44dd-83be-0bb7cc590b7f\") " pod="kube-system/kindnet-gqgk2"
	Jan 03 19:21:56 multinode-484895 kubelet[1263]: I0103 19:21:56.269311    1263 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/8d4f9028-52ad-44dd-83be-0bb7cc590b7f-cni-cfg\") pod \"kindnet-gqgk2\" (UID: \"8d4f9028-52ad-44dd-83be-0bb7cc590b7f\") " pod="kube-system/kindnet-gqgk2"
	Jan 03 19:21:56 multinode-484895 kubelet[1263]: I0103 19:21:56.370277    1263 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/728b1db9-b145-4ad3-b366-7fd8306d7a2a-kube-proxy\") pod \"kube-proxy-tp9s2\" (UID: \"728b1db9-b145-4ad3-b366-7fd8306d7a2a\") " pod="kube-system/kube-proxy-tp9s2"
	Jan 03 19:21:56 multinode-484895 kubelet[1263]: I0103 19:21:56.370357    1263 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5lr6\" (UniqueName: \"kubernetes.io/projected/728b1db9-b145-4ad3-b366-7fd8306d7a2a-kube-api-access-s5lr6\") pod \"kube-proxy-tp9s2\" (UID: \"728b1db9-b145-4ad3-b366-7fd8306d7a2a\") " pod="kube-system/kube-proxy-tp9s2"
	Jan 03 19:21:56 multinode-484895 kubelet[1263]: I0103 19:21:56.370439    1263 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/728b1db9-b145-4ad3-b366-7fd8306d7a2a-xtables-lock\") pod \"kube-proxy-tp9s2\" (UID: \"728b1db9-b145-4ad3-b366-7fd8306d7a2a\") " pod="kube-system/kube-proxy-tp9s2"
	Jan 03 19:21:56 multinode-484895 kubelet[1263]: I0103 19:21:56.370470    1263 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/728b1db9-b145-4ad3-b366-7fd8306d7a2a-lib-modules\") pod \"kube-proxy-tp9s2\" (UID: \"728b1db9-b145-4ad3-b366-7fd8306d7a2a\") " pod="kube-system/kube-proxy-tp9s2"
	Jan 03 19:22:00 multinode-484895 kubelet[1263]: I0103 19:22:00.165372    1263 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-tp9s2" podStartSLOduration=4.165338004 podCreationTimestamp="2024-01-03 19:21:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-03 19:21:58.160020324 +0000 UTC m=+14.364975259" watchObservedRunningTime="2024-01-03 19:22:00.165338004 +0000 UTC m=+16.370292939"
	Jan 03 19:22:01 multinode-484895 kubelet[1263]: I0103 19:22:01.188929    1263 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Jan 03 19:22:01 multinode-484895 kubelet[1263]: I0103 19:22:01.231023    1263 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-gqgk2" podStartSLOduration=5.23099024 podCreationTimestamp="2024-01-03 19:21:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-03 19:22:00.165605374 +0000 UTC m=+16.370560308" watchObservedRunningTime="2024-01-03 19:22:01.23099024 +0000 UTC m=+17.435945174"
	Jan 03 19:22:01 multinode-484895 kubelet[1263]: I0103 19:22:01.231273    1263 topology_manager.go:215] "Topology Admit Handler" podUID="9e8dd1fe-7476-40d5-8cb7-562fcdc5deaa" podNamespace="kube-system" podName="coredns-5dd5756b68-wzsqb"
	Jan 03 19:22:01 multinode-484895 kubelet[1263]: I0103 19:22:01.241749    1263 topology_manager.go:215] "Topology Admit Handler" podUID="82edd1c3-f361-4f86-8d59-8b89193d7a31" podNamespace="kube-system" podName="storage-provisioner"
	Jan 03 19:22:01 multinode-484895 kubelet[1263]: I0103 19:22:01.309393    1263 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rn5p8\" (UniqueName: \"kubernetes.io/projected/9e8dd1fe-7476-40d5-8cb7-562fcdc5deaa-kube-api-access-rn5p8\") pod \"coredns-5dd5756b68-wzsqb\" (UID: \"9e8dd1fe-7476-40d5-8cb7-562fcdc5deaa\") " pod="kube-system/coredns-5dd5756b68-wzsqb"
	Jan 03 19:22:01 multinode-484895 kubelet[1263]: I0103 19:22:01.309788    1263 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9e8dd1fe-7476-40d5-8cb7-562fcdc5deaa-config-volume\") pod \"coredns-5dd5756b68-wzsqb\" (UID: \"9e8dd1fe-7476-40d5-8cb7-562fcdc5deaa\") " pod="kube-system/coredns-5dd5756b68-wzsqb"
	Jan 03 19:22:01 multinode-484895 kubelet[1263]: I0103 19:22:01.309850    1263 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/82edd1c3-f361-4f86-8d59-8b89193d7a31-tmp\") pod \"storage-provisioner\" (UID: \"82edd1c3-f361-4f86-8d59-8b89193d7a31\") " pod="kube-system/storage-provisioner"
	Jan 03 19:22:01 multinode-484895 kubelet[1263]: I0103 19:22:01.309870    1263 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wblsw\" (UniqueName: \"kubernetes.io/projected/82edd1c3-f361-4f86-8d59-8b89193d7a31-kube-api-access-wblsw\") pod \"storage-provisioner\" (UID: \"82edd1c3-f361-4f86-8d59-8b89193d7a31\") " pod="kube-system/storage-provisioner"
	Jan 03 19:22:03 multinode-484895 kubelet[1263]: I0103 19:22:03.189919    1263 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=6.189882552 podCreationTimestamp="2024-01-03 19:21:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-03 19:22:03.176793581 +0000 UTC m=+19.381748517" watchObservedRunningTime="2024-01-03 19:22:03.189882552 +0000 UTC m=+19.394837505"
	Jan 03 19:22:04 multinode-484895 kubelet[1263]: I0103 19:22:04.027364    1263 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-wzsqb" podStartSLOduration=8.027326554 podCreationTimestamp="2024-01-03 19:21:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-03 19:22:03.191134912 +0000 UTC m=+19.396089846" watchObservedRunningTime="2024-01-03 19:22:04.027326554 +0000 UTC m=+20.232281488"
	Jan 03 19:22:44 multinode-484895 kubelet[1263]: E0103 19:22:44.046617    1263 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 03 19:22:44 multinode-484895 kubelet[1263]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 03 19:22:44 multinode-484895 kubelet[1263]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 03 19:22:44 multinode-484895 kubelet[1263]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 03 19:22:46 multinode-484895 kubelet[1263]: I0103 19:22:46.682416    1263 topology_manager.go:215] "Topology Admit Handler" podUID="442f70d7-17de-4ec1-99e0-f13f530e2d0f" podNamespace="default" podName="busybox-5bc68d56bd-xlczw"
	Jan 03 19:22:46 multinode-484895 kubelet[1263]: I0103 19:22:46.762300    1263 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnjfk\" (UniqueName: \"kubernetes.io/projected/442f70d7-17de-4ec1-99e0-f13f530e2d0f-kube-api-access-mnjfk\") pod \"busybox-5bc68d56bd-xlczw\" (UID: \"442f70d7-17de-4ec1-99e0-f13f530e2d0f\") " pod="default/busybox-5bc68d56bd-xlczw"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-484895 -n multinode-484895
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-484895 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (3.15s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (687.12s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-484895
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-484895
E0103 19:24:34.787715   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/ingress-addon-legacy-736101/client.crt: no such file or directory
E0103 19:25:48.654640   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/functional-166268/client.crt: no such file or directory
E0103 19:25:55.308672   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/addons-848866/client.crt: no such file or directory
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-484895: exit status 82 (2m0.965456867s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-484895"  ...
	* Stopping node "multinode-484895"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:320: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-484895" : exit status 82
multinode_test.go:323: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-484895 --wait=true -v=8 --alsologtostderr
E0103 19:27:18.356186   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/addons-848866/client.crt: no such file or directory
E0103 19:29:07.103795   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/ingress-addon-legacy-736101/client.crt: no such file or directory
E0103 19:30:48.654067   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/functional-166268/client.crt: no such file or directory
E0103 19:30:55.308181   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/addons-848866/client.crt: no such file or directory
E0103 19:32:11.701713   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/functional-166268/client.crt: no such file or directory
E0103 19:34:07.102935   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/ingress-addon-legacy-736101/client.crt: no such file or directory
E0103 19:35:30.149173   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/ingress-addon-legacy-736101/client.crt: no such file or directory
multinode_test.go:323: (dbg) Done: out/minikube-linux-amd64 start -p multinode-484895 --wait=true -v=8 --alsologtostderr: (9m23.258337962s)
multinode_test.go:328: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-484895
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-484895 -n multinode-484895
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-484895 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-484895 logs -n 25: (1.577170613s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-484895 ssh -n                                                                 | multinode-484895 | jenkins | v1.32.0 | 03 Jan 24 19:23 UTC | 03 Jan 24 19:23 UTC |
	|         | multinode-484895-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-484895 cp multinode-484895-m02:/home/docker/cp-test.txt                       | multinode-484895 | jenkins | v1.32.0 | 03 Jan 24 19:23 UTC | 03 Jan 24 19:23 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1872919329/001/cp-test_multinode-484895-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-484895 ssh -n                                                                 | multinode-484895 | jenkins | v1.32.0 | 03 Jan 24 19:23 UTC | 03 Jan 24 19:23 UTC |
	|         | multinode-484895-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-484895 cp multinode-484895-m02:/home/docker/cp-test.txt                       | multinode-484895 | jenkins | v1.32.0 | 03 Jan 24 19:23 UTC | 03 Jan 24 19:23 UTC |
	|         | multinode-484895:/home/docker/cp-test_multinode-484895-m02_multinode-484895.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-484895 ssh -n                                                                 | multinode-484895 | jenkins | v1.32.0 | 03 Jan 24 19:23 UTC | 03 Jan 24 19:23 UTC |
	|         | multinode-484895-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-484895 ssh -n multinode-484895 sudo cat                                       | multinode-484895 | jenkins | v1.32.0 | 03 Jan 24 19:23 UTC | 03 Jan 24 19:23 UTC |
	|         | /home/docker/cp-test_multinode-484895-m02_multinode-484895.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-484895 cp multinode-484895-m02:/home/docker/cp-test.txt                       | multinode-484895 | jenkins | v1.32.0 | 03 Jan 24 19:23 UTC | 03 Jan 24 19:23 UTC |
	|         | multinode-484895-m03:/home/docker/cp-test_multinode-484895-m02_multinode-484895-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-484895 ssh -n                                                                 | multinode-484895 | jenkins | v1.32.0 | 03 Jan 24 19:23 UTC | 03 Jan 24 19:23 UTC |
	|         | multinode-484895-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-484895 ssh -n multinode-484895-m03 sudo cat                                   | multinode-484895 | jenkins | v1.32.0 | 03 Jan 24 19:23 UTC | 03 Jan 24 19:23 UTC |
	|         | /home/docker/cp-test_multinode-484895-m02_multinode-484895-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-484895 cp testdata/cp-test.txt                                                | multinode-484895 | jenkins | v1.32.0 | 03 Jan 24 19:23 UTC | 03 Jan 24 19:23 UTC |
	|         | multinode-484895-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-484895 ssh -n                                                                 | multinode-484895 | jenkins | v1.32.0 | 03 Jan 24 19:23 UTC | 03 Jan 24 19:23 UTC |
	|         | multinode-484895-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-484895 cp multinode-484895-m03:/home/docker/cp-test.txt                       | multinode-484895 | jenkins | v1.32.0 | 03 Jan 24 19:23 UTC | 03 Jan 24 19:23 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1872919329/001/cp-test_multinode-484895-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-484895 ssh -n                                                                 | multinode-484895 | jenkins | v1.32.0 | 03 Jan 24 19:23 UTC | 03 Jan 24 19:23 UTC |
	|         | multinode-484895-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-484895 cp multinode-484895-m03:/home/docker/cp-test.txt                       | multinode-484895 | jenkins | v1.32.0 | 03 Jan 24 19:23 UTC | 03 Jan 24 19:23 UTC |
	|         | multinode-484895:/home/docker/cp-test_multinode-484895-m03_multinode-484895.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-484895 ssh -n                                                                 | multinode-484895 | jenkins | v1.32.0 | 03 Jan 24 19:23 UTC | 03 Jan 24 19:23 UTC |
	|         | multinode-484895-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-484895 ssh -n multinode-484895 sudo cat                                       | multinode-484895 | jenkins | v1.32.0 | 03 Jan 24 19:23 UTC | 03 Jan 24 19:23 UTC |
	|         | /home/docker/cp-test_multinode-484895-m03_multinode-484895.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-484895 cp multinode-484895-m03:/home/docker/cp-test.txt                       | multinode-484895 | jenkins | v1.32.0 | 03 Jan 24 19:23 UTC | 03 Jan 24 19:23 UTC |
	|         | multinode-484895-m02:/home/docker/cp-test_multinode-484895-m03_multinode-484895-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-484895 ssh -n                                                                 | multinode-484895 | jenkins | v1.32.0 | 03 Jan 24 19:23 UTC | 03 Jan 24 19:23 UTC |
	|         | multinode-484895-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-484895 ssh -n multinode-484895-m02 sudo cat                                   | multinode-484895 | jenkins | v1.32.0 | 03 Jan 24 19:23 UTC | 03 Jan 24 19:23 UTC |
	|         | /home/docker/cp-test_multinode-484895-m03_multinode-484895-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-484895 node stop m03                                                          | multinode-484895 | jenkins | v1.32.0 | 03 Jan 24 19:23 UTC | 03 Jan 24 19:23 UTC |
	| node    | multinode-484895 node start                                                             | multinode-484895 | jenkins | v1.32.0 | 03 Jan 24 19:23 UTC | 03 Jan 24 19:24 UTC |
	|         | m03 --alsologtostderr                                                                   |                  |         |         |                     |                     |
	| node    | list -p multinode-484895                                                                | multinode-484895 | jenkins | v1.32.0 | 03 Jan 24 19:24 UTC |                     |
	| stop    | -p multinode-484895                                                                     | multinode-484895 | jenkins | v1.32.0 | 03 Jan 24 19:24 UTC |                     |
	| start   | -p multinode-484895                                                                     | multinode-484895 | jenkins | v1.32.0 | 03 Jan 24 19:26 UTC | 03 Jan 24 19:35 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-484895                                                                | multinode-484895 | jenkins | v1.32.0 | 03 Jan 24 19:35 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/03 19:26:19
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0103 19:26:19.477753   33509 out.go:296] Setting OutFile to fd 1 ...
	I0103 19:26:19.477894   33509 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 19:26:19.477913   33509 out.go:309] Setting ErrFile to fd 2...
	I0103 19:26:19.477919   33509 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 19:26:19.478115   33509 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17885-9609/.minikube/bin
	I0103 19:26:19.478681   33509 out.go:303] Setting JSON to false
	I0103 19:26:19.479529   33509 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4127,"bootTime":1704305853,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0103 19:26:19.479587   33509 start.go:138] virtualization: kvm guest
	I0103 19:26:19.482143   33509 out.go:177] * [multinode-484895] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0103 19:26:19.483634   33509 out.go:177]   - MINIKUBE_LOCATION=17885
	I0103 19:26:19.483658   33509 notify.go:220] Checking for updates...
	I0103 19:26:19.486553   33509 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0103 19:26:19.488173   33509 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17885-9609/kubeconfig
	I0103 19:26:19.489783   33509 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17885-9609/.minikube
	I0103 19:26:19.491212   33509 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0103 19:26:19.492796   33509 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0103 19:26:19.494742   33509 config.go:182] Loaded profile config "multinode-484895": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 19:26:19.494831   33509 driver.go:392] Setting default libvirt URI to qemu:///system
	I0103 19:26:19.495256   33509 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 19:26:19.495310   33509 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 19:26:19.509455   33509 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39875
	I0103 19:26:19.509854   33509 main.go:141] libmachine: () Calling .GetVersion
	I0103 19:26:19.510305   33509 main.go:141] libmachine: Using API Version  1
	I0103 19:26:19.510321   33509 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 19:26:19.510631   33509 main.go:141] libmachine: () Calling .GetMachineName
	I0103 19:26:19.510782   33509 main.go:141] libmachine: (multinode-484895) Calling .DriverName
	I0103 19:26:19.546723   33509 out.go:177] * Using the kvm2 driver based on existing profile
	I0103 19:26:19.548291   33509 start.go:298] selected driver: kvm2
	I0103 19:26:19.548309   33509 start.go:902] validating driver "kvm2" against &{Name:multinode-484895 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.28.4 ClusterName:multinode-484895 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.191 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.86 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.156 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:fals
e ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bi
naryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 19:26:19.548436   33509 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0103 19:26:19.548742   33509 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 19:26:19.548815   33509 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17885-9609/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0103 19:26:19.563237   33509 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0103 19:26:19.563907   33509 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0103 19:26:19.563983   33509 cni.go:84] Creating CNI manager for ""
	I0103 19:26:19.564002   33509 cni.go:136] 3 nodes found, recommending kindnet
	I0103 19:26:19.564012   33509 start_flags.go:323] config:
	{Name:multinode-484895 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-484895 Namespace:default APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.191 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.86 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.156 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-pro
visioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmware
Path: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 19:26:19.564284   33509 iso.go:125] acquiring lock: {Name:mk59d09085a9554144b68de9b7bfe0e0fce53cc5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 19:26:19.566324   33509 out.go:177] * Starting control plane node multinode-484895 in cluster multinode-484895
	I0103 19:26:19.567966   33509 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0103 19:26:19.568017   33509 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0103 19:26:19.568036   33509 cache.go:56] Caching tarball of preloaded images
	I0103 19:26:19.568112   33509 preload.go:174] Found /home/jenkins/minikube-integration/17885-9609/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0103 19:26:19.568122   33509 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0103 19:26:19.568230   33509 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/multinode-484895/config.json ...
	I0103 19:26:19.568416   33509 start.go:365] acquiring machines lock for multinode-484895: {Name:mk43df5d7e9fef8aa5f3e5c539ca15bff35ae8cf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0103 19:26:19.568459   33509 start.go:369] acquired machines lock for "multinode-484895" in 22.765µs
	I0103 19:26:19.568468   33509 start.go:96] Skipping create...Using existing machine configuration
	I0103 19:26:19.568473   33509 fix.go:54] fixHost starting: 
	I0103 19:26:19.568716   33509 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 19:26:19.568750   33509 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 19:26:19.582579   33509 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37747
	I0103 19:26:19.582935   33509 main.go:141] libmachine: () Calling .GetVersion
	I0103 19:26:19.583408   33509 main.go:141] libmachine: Using API Version  1
	I0103 19:26:19.583429   33509 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 19:26:19.583693   33509 main.go:141] libmachine: () Calling .GetMachineName
	I0103 19:26:19.583874   33509 main.go:141] libmachine: (multinode-484895) Calling .DriverName
	I0103 19:26:19.583996   33509 main.go:141] libmachine: (multinode-484895) Calling .GetState
	I0103 19:26:19.585521   33509 fix.go:102] recreateIfNeeded on multinode-484895: state=Running err=<nil>
	W0103 19:26:19.585556   33509 fix.go:128] unexpected machine state, will restart: <nil>
	I0103 19:26:19.587639   33509 out.go:177] * Updating the running kvm2 "multinode-484895" VM ...
	I0103 19:26:19.589382   33509 machine.go:88] provisioning docker machine ...
	I0103 19:26:19.589406   33509 main.go:141] libmachine: (multinode-484895) Calling .DriverName
	I0103 19:26:19.589633   33509 main.go:141] libmachine: (multinode-484895) Calling .GetMachineName
	I0103 19:26:19.589819   33509 buildroot.go:166] provisioning hostname "multinode-484895"
	I0103 19:26:19.589835   33509 main.go:141] libmachine: (multinode-484895) Calling .GetMachineName
	I0103 19:26:19.589961   33509 main.go:141] libmachine: (multinode-484895) Calling .GetSSHHostname
	I0103 19:26:19.592789   33509 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:26:19.593234   33509 main.go:141] libmachine: (multinode-484895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:f0:8c", ip: ""} in network mk-multinode-484895: {Iface:virbr1 ExpiryTime:2024-01-03 20:21:15 +0000 UTC Type:0 Mac:52:54:00:28:f0:8c Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:multinode-484895 Clientid:01:52:54:00:28:f0:8c}
	I0103 19:26:19.593259   33509 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined IP address 192.168.39.191 and MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:26:19.593347   33509 main.go:141] libmachine: (multinode-484895) Calling .GetSSHPort
	I0103 19:26:19.593606   33509 main.go:141] libmachine: (multinode-484895) Calling .GetSSHKeyPath
	I0103 19:26:19.593766   33509 main.go:141] libmachine: (multinode-484895) Calling .GetSSHKeyPath
	I0103 19:26:19.593904   33509 main.go:141] libmachine: (multinode-484895) Calling .GetSSHUsername
	I0103 19:26:19.594073   33509 main.go:141] libmachine: Using SSH client type: native
	I0103 19:26:19.594398   33509 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I0103 19:26:19.594412   33509 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-484895 && echo "multinode-484895" | sudo tee /etc/hostname
	I0103 19:26:38.150830   33509 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.191:22: connect: no route to host
	I0103 19:26:44.230885   33509 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.191:22: connect: no route to host
	I0103 19:26:47.302841   33509 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.191:22: connect: no route to host
	I0103 19:26:53.382837   33509 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.191:22: connect: no route to host
	I0103 19:26:56.458810   33509 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.191:22: connect: no route to host
	I0103 19:27:02.534849   33509 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.191:22: connect: no route to host
	I0103 19:27:05.606790   33509 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.191:22: connect: no route to host
	I0103 19:27:11.686817   33509 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.191:22: connect: no route to host
	I0103 19:27:14.758752   33509 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.191:22: connect: no route to host
	I0103 19:27:20.838833   33509 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.191:22: connect: no route to host
	I0103 19:27:23.910783   33509 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.191:22: connect: no route to host
	I0103 19:27:29.990813   33509 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.191:22: connect: no route to host
	I0103 19:27:33.062817   33509 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.191:22: connect: no route to host
	I0103 19:27:39.142807   33509 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.191:22: connect: no route to host
	I0103 19:27:42.214895   33509 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.191:22: connect: no route to host
	I0103 19:27:48.294823   33509 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.191:22: connect: no route to host
	I0103 19:27:51.366814   33509 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.191:22: connect: no route to host
	I0103 19:27:57.446743   33509 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.191:22: connect: no route to host
	I0103 19:28:00.518859   33509 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.191:22: connect: no route to host
	I0103 19:28:06.598840   33509 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.191:22: connect: no route to host
	I0103 19:28:09.670768   33509 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.191:22: connect: no route to host
	I0103 19:28:15.750791   33509 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.191:22: connect: no route to host
	I0103 19:28:18.822868   33509 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.191:22: connect: no route to host
	I0103 19:28:24.902901   33509 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.191:22: connect: no route to host
	I0103 19:28:27.974933   33509 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.191:22: connect: no route to host
	I0103 19:28:34.054832   33509 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.191:22: connect: no route to host
	I0103 19:28:37.130786   33509 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.191:22: connect: no route to host
	I0103 19:28:43.206853   33509 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.191:22: connect: no route to host
	I0103 19:28:46.278782   33509 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.191:22: connect: no route to host
	I0103 19:28:52.358782   33509 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.191:22: connect: no route to host
	I0103 19:28:55.430813   33509 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.191:22: connect: no route to host
	I0103 19:29:01.510927   33509 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.191:22: connect: no route to host
	I0103 19:29:04.582850   33509 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.191:22: connect: no route to host
	I0103 19:29:10.662811   33509 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.191:22: connect: no route to host
	I0103 19:29:13.734761   33509 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.191:22: connect: no route to host
	I0103 19:29:19.814818   33509 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.191:22: connect: no route to host
	I0103 19:29:22.886770   33509 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.191:22: connect: no route to host
	I0103 19:29:28.966846   33509 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.191:22: connect: no route to host
	I0103 19:29:32.038878   33509 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.191:22: connect: no route to host
	I0103 19:29:38.118786   33509 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.191:22: connect: no route to host
	I0103 19:29:41.190785   33509 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.191:22: connect: no route to host
	I0103 19:29:47.270794   33509 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.191:22: connect: no route to host
	I0103 19:29:50.342864   33509 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.191:22: connect: no route to host
	I0103 19:29:56.422880   33509 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.191:22: connect: no route to host
	I0103 19:29:59.494769   33509 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.191:22: connect: no route to host
	I0103 19:30:05.574783   33509 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.191:22: connect: no route to host
	I0103 19:30:08.646887   33509 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.191:22: connect: no route to host
	I0103 19:30:14.726773   33509 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.191:22: connect: no route to host
	I0103 19:30:17.798773   33509 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.191:22: connect: no route to host
	I0103 19:30:23.878891   33509 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.191:22: connect: no route to host
	I0103 19:30:26.950852   33509 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.191:22: connect: no route to host
	I0103 19:30:33.030827   33509 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.191:22: connect: no route to host
	I0103 19:30:36.102831   33509 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.191:22: connect: no route to host
	I0103 19:30:42.182804   33509 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.191:22: connect: no route to host
	I0103 19:30:45.254868   33509 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.191:22: connect: no route to host
	I0103 19:30:51.334850   33509 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.191:22: connect: no route to host
	I0103 19:30:54.406781   33509 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.191:22: connect: no route to host
	I0103 19:31:00.486783   33509 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.191:22: connect: no route to host
	I0103 19:31:03.558776   33509 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.191:22: connect: no route to host
	I0103 19:31:09.638742   33509 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.191:22: connect: no route to host
	I0103 19:31:12.640862   33509 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0103 19:31:12.640900   33509 main.go:141] libmachine: (multinode-484895) Calling .GetSSHHostname
	I0103 19:31:12.642773   33509 machine.go:91] provisioned docker machine in 4m53.053369408s
	I0103 19:31:12.642811   33509 fix.go:56] fixHost completed within 4m53.074337841s
	I0103 19:31:12.642816   33509 start.go:83] releasing machines lock for "multinode-484895", held for 4m53.074351285s
	W0103 19:31:12.642833   33509 start.go:694] error starting host: provision: host is not running
	W0103 19:31:12.642920   33509 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0103 19:31:12.642934   33509 start.go:709] Will try again in 5 seconds ...
	I0103 19:31:17.645157   33509 start.go:365] acquiring machines lock for multinode-484895: {Name:mk43df5d7e9fef8aa5f3e5c539ca15bff35ae8cf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0103 19:31:17.645255   33509 start.go:369] acquired machines lock for "multinode-484895" in 58.608µs
	I0103 19:31:17.645274   33509 start.go:96] Skipping create...Using existing machine configuration
	I0103 19:31:17.645280   33509 fix.go:54] fixHost starting: 
	I0103 19:31:17.645552   33509 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 19:31:17.645572   33509 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 19:31:17.660484   33509 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39877
	I0103 19:31:17.660911   33509 main.go:141] libmachine: () Calling .GetVersion
	I0103 19:31:17.661349   33509 main.go:141] libmachine: Using API Version  1
	I0103 19:31:17.661370   33509 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 19:31:17.661745   33509 main.go:141] libmachine: () Calling .GetMachineName
	I0103 19:31:17.661953   33509 main.go:141] libmachine: (multinode-484895) Calling .DriverName
	I0103 19:31:17.662132   33509 main.go:141] libmachine: (multinode-484895) Calling .GetState
	I0103 19:31:17.664262   33509 fix.go:102] recreateIfNeeded on multinode-484895: state=Stopped err=<nil>
	I0103 19:31:17.664284   33509 main.go:141] libmachine: (multinode-484895) Calling .DriverName
	W0103 19:31:17.664518   33509 fix.go:128] unexpected machine state, will restart: <nil>
	I0103 19:31:17.666665   33509 out.go:177] * Restarting existing kvm2 VM for "multinode-484895" ...
	I0103 19:31:17.668033   33509 main.go:141] libmachine: (multinode-484895) Calling .Start
	I0103 19:31:17.668215   33509 main.go:141] libmachine: (multinode-484895) Ensuring networks are active...
	I0103 19:31:17.669051   33509 main.go:141] libmachine: (multinode-484895) Ensuring network default is active
	I0103 19:31:17.669518   33509 main.go:141] libmachine: (multinode-484895) Ensuring network mk-multinode-484895 is active
	I0103 19:31:17.669950   33509 main.go:141] libmachine: (multinode-484895) Getting domain xml...
	I0103 19:31:17.671011   33509 main.go:141] libmachine: (multinode-484895) Creating domain...
	I0103 19:31:18.920276   33509 main.go:141] libmachine: (multinode-484895) Waiting to get IP...
	I0103 19:31:18.921154   33509 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:31:18.921662   33509 main.go:141] libmachine: (multinode-484895) DBG | unable to find current IP address of domain multinode-484895 in network mk-multinode-484895
	I0103 19:31:18.921742   33509 main.go:141] libmachine: (multinode-484895) DBG | I0103 19:31:18.921638   34265 retry.go:31] will retry after 302.182869ms: waiting for machine to come up
	I0103 19:31:19.225315   33509 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:31:19.225845   33509 main.go:141] libmachine: (multinode-484895) DBG | unable to find current IP address of domain multinode-484895 in network mk-multinode-484895
	I0103 19:31:19.225876   33509 main.go:141] libmachine: (multinode-484895) DBG | I0103 19:31:19.225797   34265 retry.go:31] will retry after 297.097865ms: waiting for machine to come up
	I0103 19:31:19.524356   33509 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:31:19.524879   33509 main.go:141] libmachine: (multinode-484895) DBG | unable to find current IP address of domain multinode-484895 in network mk-multinode-484895
	I0103 19:31:19.524911   33509 main.go:141] libmachine: (multinode-484895) DBG | I0103 19:31:19.524836   34265 retry.go:31] will retry after 370.962676ms: waiting for machine to come up
	I0103 19:31:19.897530   33509 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:31:19.897904   33509 main.go:141] libmachine: (multinode-484895) DBG | unable to find current IP address of domain multinode-484895 in network mk-multinode-484895
	I0103 19:31:19.897936   33509 main.go:141] libmachine: (multinode-484895) DBG | I0103 19:31:19.897858   34265 retry.go:31] will retry after 499.425337ms: waiting for machine to come up
	I0103 19:31:20.398422   33509 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:31:20.398885   33509 main.go:141] libmachine: (multinode-484895) DBG | unable to find current IP address of domain multinode-484895 in network mk-multinode-484895
	I0103 19:31:20.398918   33509 main.go:141] libmachine: (multinode-484895) DBG | I0103 19:31:20.398831   34265 retry.go:31] will retry after 529.909716ms: waiting for machine to come up
	I0103 19:31:20.930646   33509 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:31:20.931038   33509 main.go:141] libmachine: (multinode-484895) DBG | unable to find current IP address of domain multinode-484895 in network mk-multinode-484895
	I0103 19:31:20.931067   33509 main.go:141] libmachine: (multinode-484895) DBG | I0103 19:31:20.930988   34265 retry.go:31] will retry after 680.770555ms: waiting for machine to come up
	I0103 19:31:21.612915   33509 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:31:21.613507   33509 main.go:141] libmachine: (multinode-484895) DBG | unable to find current IP address of domain multinode-484895 in network mk-multinode-484895
	I0103 19:31:21.613561   33509 main.go:141] libmachine: (multinode-484895) DBG | I0103 19:31:21.613453   34265 retry.go:31] will retry after 729.746474ms: waiting for machine to come up
	I0103 19:31:22.344841   33509 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:31:22.345317   33509 main.go:141] libmachine: (multinode-484895) DBG | unable to find current IP address of domain multinode-484895 in network mk-multinode-484895
	I0103 19:31:22.345350   33509 main.go:141] libmachine: (multinode-484895) DBG | I0103 19:31:22.345277   34265 retry.go:31] will retry after 1.359888242s: waiting for machine to come up
	I0103 19:31:23.707109   33509 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:31:23.707592   33509 main.go:141] libmachine: (multinode-484895) DBG | unable to find current IP address of domain multinode-484895 in network mk-multinode-484895
	I0103 19:31:23.707622   33509 main.go:141] libmachine: (multinode-484895) DBG | I0103 19:31:23.707540   34265 retry.go:31] will retry after 1.591192586s: waiting for machine to come up
	I0103 19:31:25.301420   33509 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:31:25.301917   33509 main.go:141] libmachine: (multinode-484895) DBG | unable to find current IP address of domain multinode-484895 in network mk-multinode-484895
	I0103 19:31:25.301954   33509 main.go:141] libmachine: (multinode-484895) DBG | I0103 19:31:25.301849   34265 retry.go:31] will retry after 2.189444603s: waiting for machine to come up
	I0103 19:31:27.494182   33509 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:31:27.494761   33509 main.go:141] libmachine: (multinode-484895) DBG | unable to find current IP address of domain multinode-484895 in network mk-multinode-484895
	I0103 19:31:27.494800   33509 main.go:141] libmachine: (multinode-484895) DBG | I0103 19:31:27.494703   34265 retry.go:31] will retry after 2.018945869s: waiting for machine to come up
	I0103 19:31:29.516061   33509 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:31:29.516490   33509 main.go:141] libmachine: (multinode-484895) DBG | unable to find current IP address of domain multinode-484895 in network mk-multinode-484895
	I0103 19:31:29.516518   33509 main.go:141] libmachine: (multinode-484895) DBG | I0103 19:31:29.516447   34265 retry.go:31] will retry after 2.937289123s: waiting for machine to come up
	I0103 19:31:32.457670   33509 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:31:32.458024   33509 main.go:141] libmachine: (multinode-484895) DBG | unable to find current IP address of domain multinode-484895 in network mk-multinode-484895
	I0103 19:31:32.458047   33509 main.go:141] libmachine: (multinode-484895) DBG | I0103 19:31:32.457993   34265 retry.go:31] will retry after 4.140839002s: waiting for machine to come up
	I0103 19:31:36.603687   33509 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:31:36.604130   33509 main.go:141] libmachine: (multinode-484895) Found IP for machine: 192.168.39.191
	I0103 19:31:36.604159   33509 main.go:141] libmachine: (multinode-484895) Reserving static IP address...
	I0103 19:31:36.604172   33509 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has current primary IP address 192.168.39.191 and MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:31:36.604537   33509 main.go:141] libmachine: (multinode-484895) DBG | found host DHCP lease matching {name: "multinode-484895", mac: "52:54:00:28:f0:8c", ip: "192.168.39.191"} in network mk-multinode-484895: {Iface:virbr1 ExpiryTime:2024-01-03 20:31:29 +0000 UTC Type:0 Mac:52:54:00:28:f0:8c Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:multinode-484895 Clientid:01:52:54:00:28:f0:8c}
	I0103 19:31:36.604578   33509 main.go:141] libmachine: (multinode-484895) Reserved static IP address: 192.168.39.191
	I0103 19:31:36.604601   33509 main.go:141] libmachine: (multinode-484895) DBG | skip adding static IP to network mk-multinode-484895 - found existing host DHCP lease matching {name: "multinode-484895", mac: "52:54:00:28:f0:8c", ip: "192.168.39.191"}
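[Editor's note] The "will retry after ..." lines above come from minikube's retry helper polling libvirt for a DHCP lease with a widening delay. A minimal Go sketch of that idea (delays and function names here are illustrative, not minikube's actual retry.go):

// Editor's sketch: poll a lookup function for the VM's IP with a growing
// delay, mirroring the widening retry intervals in the log (~300ms up to ~4s).
package main

import (
	"errors"
	"fmt"
	"time"
)

func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond // first retry in the log is ~300ms
	for time.Now().Before(deadline) {
		if ip, err := lookup(); err == nil && ip != "" {
			return ip, nil
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		delay *= 2 // intervals widen over time, as in the log
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 3 {
			return "", errors.New("no lease yet")
		}
		return "192.168.39.191", nil
	}, time.Minute)
	fmt.Println(ip, err)
}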
	I0103 19:31:36.604615   33509 main.go:141] libmachine: (multinode-484895) Waiting for SSH to be available...
	I0103 19:31:36.604625   33509 main.go:141] libmachine: (multinode-484895) DBG | Getting to WaitForSSH function...
	I0103 19:31:36.606479   33509 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:31:36.606835   33509 main.go:141] libmachine: (multinode-484895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:f0:8c", ip: ""} in network mk-multinode-484895: {Iface:virbr1 ExpiryTime:2024-01-03 20:31:29 +0000 UTC Type:0 Mac:52:54:00:28:f0:8c Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:multinode-484895 Clientid:01:52:54:00:28:f0:8c}
	I0103 19:31:36.606873   33509 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined IP address 192.168.39.191 and MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:31:36.607003   33509 main.go:141] libmachine: (multinode-484895) DBG | Using SSH client type: external
	I0103 19:31:36.607027   33509 main.go:141] libmachine: (multinode-484895) DBG | Using SSH private key: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/multinode-484895/id_rsa (-rw-------)
	I0103 19:31:36.607051   33509 main.go:141] libmachine: (multinode-484895) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.191 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17885-9609/.minikube/machines/multinode-484895/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0103 19:31:36.607064   33509 main.go:141] libmachine: (multinode-484895) DBG | About to run SSH command:
	I0103 19:31:36.607073   33509 main.go:141] libmachine: (multinode-484895) DBG | exit 0
	I0103 19:31:36.698023   33509 main.go:141] libmachine: (multinode-484895) DBG | SSH cmd err, output: <nil>: 
	I0103 19:31:36.698364   33509 main.go:141] libmachine: (multinode-484895) Calling .GetConfigRaw
	I0103 19:31:36.698982   33509 main.go:141] libmachine: (multinode-484895) Calling .GetIP
	I0103 19:31:36.701260   33509 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:31:36.701593   33509 main.go:141] libmachine: (multinode-484895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:f0:8c", ip: ""} in network mk-multinode-484895: {Iface:virbr1 ExpiryTime:2024-01-03 20:31:29 +0000 UTC Type:0 Mac:52:54:00:28:f0:8c Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:multinode-484895 Clientid:01:52:54:00:28:f0:8c}
	I0103 19:31:36.701627   33509 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined IP address 192.168.39.191 and MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:31:36.701832   33509 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/multinode-484895/config.json ...
	I0103 19:31:36.702017   33509 machine.go:88] provisioning docker machine ...
	I0103 19:31:36.702036   33509 main.go:141] libmachine: (multinode-484895) Calling .DriverName
	I0103 19:31:36.702224   33509 main.go:141] libmachine: (multinode-484895) Calling .GetMachineName
	I0103 19:31:36.702399   33509 buildroot.go:166] provisioning hostname "multinode-484895"
	I0103 19:31:36.702420   33509 main.go:141] libmachine: (multinode-484895) Calling .GetMachineName
	I0103 19:31:36.702584   33509 main.go:141] libmachine: (multinode-484895) Calling .GetSSHHostname
	I0103 19:31:36.704979   33509 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:31:36.705383   33509 main.go:141] libmachine: (multinode-484895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:f0:8c", ip: ""} in network mk-multinode-484895: {Iface:virbr1 ExpiryTime:2024-01-03 20:31:29 +0000 UTC Type:0 Mac:52:54:00:28:f0:8c Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:multinode-484895 Clientid:01:52:54:00:28:f0:8c}
	I0103 19:31:36.705399   33509 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined IP address 192.168.39.191 and MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:31:36.705572   33509 main.go:141] libmachine: (multinode-484895) Calling .GetSSHPort
	I0103 19:31:36.705732   33509 main.go:141] libmachine: (multinode-484895) Calling .GetSSHKeyPath
	I0103 19:31:36.705851   33509 main.go:141] libmachine: (multinode-484895) Calling .GetSSHKeyPath
	I0103 19:31:36.706028   33509 main.go:141] libmachine: (multinode-484895) Calling .GetSSHUsername
	I0103 19:31:36.706193   33509 main.go:141] libmachine: Using SSH client type: native
	I0103 19:31:36.706501   33509 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I0103 19:31:36.706514   33509 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-484895 && echo "multinode-484895" | sudo tee /etc/hostname
	I0103 19:31:36.838440   33509 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-484895
	
	I0103 19:31:36.838466   33509 main.go:141] libmachine: (multinode-484895) Calling .GetSSHHostname
	I0103 19:31:36.841316   33509 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:31:36.841690   33509 main.go:141] libmachine: (multinode-484895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:f0:8c", ip: ""} in network mk-multinode-484895: {Iface:virbr1 ExpiryTime:2024-01-03 20:31:29 +0000 UTC Type:0 Mac:52:54:00:28:f0:8c Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:multinode-484895 Clientid:01:52:54:00:28:f0:8c}
	I0103 19:31:36.841716   33509 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined IP address 192.168.39.191 and MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:31:36.841847   33509 main.go:141] libmachine: (multinode-484895) Calling .GetSSHPort
	I0103 19:31:36.842057   33509 main.go:141] libmachine: (multinode-484895) Calling .GetSSHKeyPath
	I0103 19:31:36.842243   33509 main.go:141] libmachine: (multinode-484895) Calling .GetSSHKeyPath
	I0103 19:31:36.842400   33509 main.go:141] libmachine: (multinode-484895) Calling .GetSSHUsername
	I0103 19:31:36.842558   33509 main.go:141] libmachine: Using SSH client type: native
	I0103 19:31:36.842988   33509 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I0103 19:31:36.843013   33509 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-484895' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-484895/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-484895' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0103 19:31:36.974249   33509 main.go:141] libmachine: SSH cmd err, output: <nil>: 
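[Editor's note] The hostname and /etc/hosts commands above are run through the machine's "native" SSH client with the id_rsa key shown earlier. A rough sketch of executing such a provisioning command with golang.org/x/crypto/ssh (assumed library usage; this is not minikube's ssh_runner code):

// Editor's sketch: run a remote provisioning command over SSH. Host, user,
// and key path are taken from the log; the helper itself is illustrative.
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func runRemote(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()
	out, err := session.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runRemote("192.168.39.191:22", "docker",
		"/home/jenkins/minikube-integration/17885-9609/.minikube/machines/multinode-484895/id_rsa",
		`sudo hostname multinode-484895 && echo "multinode-484895" | sudo tee /etc/hostname`)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(out)
}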
	I0103 19:31:36.974285   33509 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17885-9609/.minikube CaCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17885-9609/.minikube}
	I0103 19:31:36.974312   33509 buildroot.go:174] setting up certificates
	I0103 19:31:36.974325   33509 provision.go:83] configureAuth start
	I0103 19:31:36.974338   33509 main.go:141] libmachine: (multinode-484895) Calling .GetMachineName
	I0103 19:31:36.974658   33509 main.go:141] libmachine: (multinode-484895) Calling .GetIP
	I0103 19:31:36.977486   33509 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:31:36.977940   33509 main.go:141] libmachine: (multinode-484895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:f0:8c", ip: ""} in network mk-multinode-484895: {Iface:virbr1 ExpiryTime:2024-01-03 20:31:29 +0000 UTC Type:0 Mac:52:54:00:28:f0:8c Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:multinode-484895 Clientid:01:52:54:00:28:f0:8c}
	I0103 19:31:36.977968   33509 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined IP address 192.168.39.191 and MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:31:36.978131   33509 main.go:141] libmachine: (multinode-484895) Calling .GetSSHHostname
	I0103 19:31:36.980640   33509 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:31:36.981036   33509 main.go:141] libmachine: (multinode-484895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:f0:8c", ip: ""} in network mk-multinode-484895: {Iface:virbr1 ExpiryTime:2024-01-03 20:31:29 +0000 UTC Type:0 Mac:52:54:00:28:f0:8c Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:multinode-484895 Clientid:01:52:54:00:28:f0:8c}
	I0103 19:31:36.981059   33509 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined IP address 192.168.39.191 and MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:31:36.981182   33509 provision.go:138] copyHostCerts
	I0103 19:31:36.981218   33509 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem
	I0103 19:31:36.981253   33509 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem, removing ...
	I0103 19:31:36.981266   33509 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem
	I0103 19:31:36.981332   33509 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem (1078 bytes)
	I0103 19:31:36.981406   33509 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem
	I0103 19:31:36.981425   33509 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem, removing ...
	I0103 19:31:36.981432   33509 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem
	I0103 19:31:36.981454   33509 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem (1123 bytes)
	I0103 19:31:36.981493   33509 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem
	I0103 19:31:36.981508   33509 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem, removing ...
	I0103 19:31:36.981515   33509 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem
	I0103 19:31:36.981536   33509 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem (1679 bytes)
	I0103 19:31:36.981577   33509 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem org=jenkins.multinode-484895 san=[192.168.39.191 192.168.39.191 localhost 127.0.0.1 minikube multinode-484895]
	I0103 19:31:37.095150   33509 provision.go:172] copyRemoteCerts
	I0103 19:31:37.095208   33509 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0103 19:31:37.095231   33509 main.go:141] libmachine: (multinode-484895) Calling .GetSSHHostname
	I0103 19:31:37.097873   33509 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:31:37.098326   33509 main.go:141] libmachine: (multinode-484895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:f0:8c", ip: ""} in network mk-multinode-484895: {Iface:virbr1 ExpiryTime:2024-01-03 20:31:29 +0000 UTC Type:0 Mac:52:54:00:28:f0:8c Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:multinode-484895 Clientid:01:52:54:00:28:f0:8c}
	I0103 19:31:37.098345   33509 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined IP address 192.168.39.191 and MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:31:37.098540   33509 main.go:141] libmachine: (multinode-484895) Calling .GetSSHPort
	I0103 19:31:37.098735   33509 main.go:141] libmachine: (multinode-484895) Calling .GetSSHKeyPath
	I0103 19:31:37.098877   33509 main.go:141] libmachine: (multinode-484895) Calling .GetSSHUsername
	I0103 19:31:37.099000   33509 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/multinode-484895/id_rsa Username:docker}
	I0103 19:31:37.188503   33509 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0103 19:31:37.188609   33509 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0103 19:31:37.210500   33509 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0103 19:31:37.210592   33509 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0103 19:31:37.232402   33509 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0103 19:31:37.232480   33509 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0103 19:31:37.253578   33509 provision.go:86] duration metric: configureAuth took 279.239714ms
	I0103 19:31:37.253626   33509 buildroot.go:189] setting minikube options for container-runtime
	I0103 19:31:37.253898   33509 config.go:182] Loaded profile config "multinode-484895": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 19:31:37.253986   33509 main.go:141] libmachine: (multinode-484895) Calling .GetSSHHostname
	I0103 19:31:37.256663   33509 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:31:37.257148   33509 main.go:141] libmachine: (multinode-484895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:f0:8c", ip: ""} in network mk-multinode-484895: {Iface:virbr1 ExpiryTime:2024-01-03 20:31:29 +0000 UTC Type:0 Mac:52:54:00:28:f0:8c Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:multinode-484895 Clientid:01:52:54:00:28:f0:8c}
	I0103 19:31:37.257173   33509 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined IP address 192.168.39.191 and MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:31:37.257400   33509 main.go:141] libmachine: (multinode-484895) Calling .GetSSHPort
	I0103 19:31:37.257585   33509 main.go:141] libmachine: (multinode-484895) Calling .GetSSHKeyPath
	I0103 19:31:37.257747   33509 main.go:141] libmachine: (multinode-484895) Calling .GetSSHKeyPath
	I0103 19:31:37.257969   33509 main.go:141] libmachine: (multinode-484895) Calling .GetSSHUsername
	I0103 19:31:37.258159   33509 main.go:141] libmachine: Using SSH client type: native
	I0103 19:31:37.258455   33509 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I0103 19:31:37.258470   33509 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0103 19:31:37.588699   33509 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0103 19:31:37.588723   33509 machine.go:91] provisioned docker machine in 886.693113ms
	I0103 19:31:37.588732   33509 start.go:300] post-start starting for "multinode-484895" (driver="kvm2")
	I0103 19:31:37.588741   33509 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0103 19:31:37.588755   33509 main.go:141] libmachine: (multinode-484895) Calling .DriverName
	I0103 19:31:37.589038   33509 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0103 19:31:37.589059   33509 main.go:141] libmachine: (multinode-484895) Calling .GetSSHHostname
	I0103 19:31:37.592250   33509 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:31:37.592782   33509 main.go:141] libmachine: (multinode-484895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:f0:8c", ip: ""} in network mk-multinode-484895: {Iface:virbr1 ExpiryTime:2024-01-03 20:31:29 +0000 UTC Type:0 Mac:52:54:00:28:f0:8c Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:multinode-484895 Clientid:01:52:54:00:28:f0:8c}
	I0103 19:31:37.592804   33509 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined IP address 192.168.39.191 and MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:31:37.593074   33509 main.go:141] libmachine: (multinode-484895) Calling .GetSSHPort
	I0103 19:31:37.593332   33509 main.go:141] libmachine: (multinode-484895) Calling .GetSSHKeyPath
	I0103 19:31:37.593568   33509 main.go:141] libmachine: (multinode-484895) Calling .GetSSHUsername
	I0103 19:31:37.593726   33509 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/multinode-484895/id_rsa Username:docker}
	I0103 19:31:37.688533   33509 ssh_runner.go:195] Run: cat /etc/os-release
	I0103 19:31:37.692188   33509 command_runner.go:130] > NAME=Buildroot
	I0103 19:31:37.692212   33509 command_runner.go:130] > VERSION=2021.02.12-1-gae27a7b-dirty
	I0103 19:31:37.692218   33509 command_runner.go:130] > ID=buildroot
	I0103 19:31:37.692247   33509 command_runner.go:130] > VERSION_ID=2021.02.12
	I0103 19:31:37.692260   33509 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0103 19:31:37.692355   33509 info.go:137] Remote host: Buildroot 2021.02.12
	I0103 19:31:37.692378   33509 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-9609/.minikube/addons for local assets ...
	I0103 19:31:37.692454   33509 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-9609/.minikube/files for local assets ...
	I0103 19:31:37.692563   33509 filesync.go:149] local asset: /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem -> 167952.pem in /etc/ssl/certs
	I0103 19:31:37.692575   33509 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem -> /etc/ssl/certs/167952.pem
	I0103 19:31:37.692675   33509 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0103 19:31:37.701382   33509 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem --> /etc/ssl/certs/167952.pem (1708 bytes)
	I0103 19:31:37.722296   33509 start.go:303] post-start completed in 133.550981ms
	I0103 19:31:37.722323   33509 fix.go:56] fixHost completed within 20.077040761s
	I0103 19:31:37.722347   33509 main.go:141] libmachine: (multinode-484895) Calling .GetSSHHostname
	I0103 19:31:37.724970   33509 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:31:37.725352   33509 main.go:141] libmachine: (multinode-484895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:f0:8c", ip: ""} in network mk-multinode-484895: {Iface:virbr1 ExpiryTime:2024-01-03 20:31:29 +0000 UTC Type:0 Mac:52:54:00:28:f0:8c Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:multinode-484895 Clientid:01:52:54:00:28:f0:8c}
	I0103 19:31:37.725381   33509 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined IP address 192.168.39.191 and MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:31:37.725529   33509 main.go:141] libmachine: (multinode-484895) Calling .GetSSHPort
	I0103 19:31:37.725722   33509 main.go:141] libmachine: (multinode-484895) Calling .GetSSHKeyPath
	I0103 19:31:37.725955   33509 main.go:141] libmachine: (multinode-484895) Calling .GetSSHKeyPath
	I0103 19:31:37.726110   33509 main.go:141] libmachine: (multinode-484895) Calling .GetSSHUsername
	I0103 19:31:37.726294   33509 main.go:141] libmachine: Using SSH client type: native
	I0103 19:31:37.726620   33509 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I0103 19:31:37.726633   33509 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0103 19:31:37.847055   33509 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704310297.802667077
	
	I0103 19:31:37.847073   33509 fix.go:206] guest clock: 1704310297.802667077
	I0103 19:31:37.847079   33509 fix.go:219] Guest: 2024-01-03 19:31:37.802667077 +0000 UTC Remote: 2024-01-03 19:31:37.722326798 +0000 UTC m=+318.292013162 (delta=80.340279ms)
	I0103 19:31:37.847115   33509 fix.go:190] guest clock delta is within tolerance: 80.340279ms
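[Editor's note] The guest-clock check above parses the VM's `date` output and compares it with the host-side timestamp; with the values in the log the drift works out to about 80ms. A small sketch of that delta calculation (the one-second tolerance below is a placeholder, not the value fix.go actually uses):

// Editor's sketch of the clock-skew check logged above: compute guest - remote
// and accept the drift if it is under some tolerance.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func guestClockDelta(guestOutput string, remote time.Time) (time.Duration, error) {
	// guestOutput is the "seconds.nanoseconds" string reported by the guest,
	// e.g. "1704310297.802667077" in the log above.
	parts := strings.SplitN(strings.TrimSpace(guestOutput), ".", 2)
	secs, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nanos int64
	if len(parts) == 2 {
		if nanos, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return 0, err
		}
	}
	guest := time.Unix(secs, nanos)
	return guest.Sub(remote), nil
}

func main() {
	remote := time.Date(2024, 1, 3, 19, 31, 37, 722326798, time.UTC) // "Remote" time from the log
	delta, err := guestClockDelta("1704310297.802667077", remote)
	if err != nil {
		panic(err)
	}
	if delta < 0 {
		delta = -delta
	}
	const tolerance = time.Second // placeholder tolerance, not minikube's actual value
	fmt.Printf("delta=%v within tolerance=%v: %v\n", delta, tolerance, delta < tolerance)
	// With the values from the log this prints a delta of roughly 80.34ms.
}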
	I0103 19:31:37.847120   33509 start.go:83] releasing machines lock for "multinode-484895", held for 20.201857901s
	I0103 19:31:37.847140   33509 main.go:141] libmachine: (multinode-484895) Calling .DriverName
	I0103 19:31:37.847418   33509 main.go:141] libmachine: (multinode-484895) Calling .GetIP
	I0103 19:31:37.849869   33509 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:31:37.850260   33509 main.go:141] libmachine: (multinode-484895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:f0:8c", ip: ""} in network mk-multinode-484895: {Iface:virbr1 ExpiryTime:2024-01-03 20:31:29 +0000 UTC Type:0 Mac:52:54:00:28:f0:8c Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:multinode-484895 Clientid:01:52:54:00:28:f0:8c}
	I0103 19:31:37.850291   33509 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined IP address 192.168.39.191 and MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:31:37.850466   33509 main.go:141] libmachine: (multinode-484895) Calling .DriverName
	I0103 19:31:37.851125   33509 main.go:141] libmachine: (multinode-484895) Calling .DriverName
	I0103 19:31:37.851316   33509 main.go:141] libmachine: (multinode-484895) Calling .DriverName
	I0103 19:31:37.851396   33509 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0103 19:31:37.851445   33509 main.go:141] libmachine: (multinode-484895) Calling .GetSSHHostname
	I0103 19:31:37.851554   33509 ssh_runner.go:195] Run: cat /version.json
	I0103 19:31:37.851596   33509 main.go:141] libmachine: (multinode-484895) Calling .GetSSHHostname
	I0103 19:31:37.853914   33509 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:31:37.854235   33509 main.go:141] libmachine: (multinode-484895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:f0:8c", ip: ""} in network mk-multinode-484895: {Iface:virbr1 ExpiryTime:2024-01-03 20:31:29 +0000 UTC Type:0 Mac:52:54:00:28:f0:8c Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:multinode-484895 Clientid:01:52:54:00:28:f0:8c}
	I0103 19:31:37.854288   33509 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined IP address 192.168.39.191 and MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:31:37.854324   33509 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:31:37.854365   33509 main.go:141] libmachine: (multinode-484895) Calling .GetSSHPort
	I0103 19:31:37.854534   33509 main.go:141] libmachine: (multinode-484895) Calling .GetSSHKeyPath
	I0103 19:31:37.854690   33509 main.go:141] libmachine: (multinode-484895) Calling .GetSSHUsername
	I0103 19:31:37.854740   33509 main.go:141] libmachine: (multinode-484895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:f0:8c", ip: ""} in network mk-multinode-484895: {Iface:virbr1 ExpiryTime:2024-01-03 20:31:29 +0000 UTC Type:0 Mac:52:54:00:28:f0:8c Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:multinode-484895 Clientid:01:52:54:00:28:f0:8c}
	I0103 19:31:37.854784   33509 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined IP address 192.168.39.191 and MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:31:37.854846   33509 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/multinode-484895/id_rsa Username:docker}
	I0103 19:31:37.854935   33509 main.go:141] libmachine: (multinode-484895) Calling .GetSSHPort
	I0103 19:31:37.855094   33509 main.go:141] libmachine: (multinode-484895) Calling .GetSSHKeyPath
	I0103 19:31:37.855234   33509 main.go:141] libmachine: (multinode-484895) Calling .GetSSHUsername
	I0103 19:31:37.855363   33509 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/multinode-484895/id_rsa Username:docker}
	I0103 19:31:37.939162   33509 command_runner.go:130] > {"iso_version": "v1.32.1-1702708929-17806", "kicbase_version": "v0.0.42-1702660877-17806", "minikube_version": "v1.32.0", "commit": "957da21b08687cca2533dd65b67e68ead277b79e"}
	I0103 19:31:37.939339   33509 ssh_runner.go:195] Run: systemctl --version
	I0103 19:31:37.979722   33509 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0103 19:31:37.980669   33509 command_runner.go:130] > systemd 247 (247)
	I0103 19:31:37.980697   33509 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0103 19:31:37.980774   33509 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0103 19:31:38.120630   33509 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0103 19:31:38.126806   33509 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0103 19:31:38.126849   33509 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0103 19:31:38.126906   33509 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0103 19:31:38.140507   33509 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0103 19:31:38.140549   33509 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0103 19:31:38.140557   33509 start.go:475] detecting cgroup driver to use...
	I0103 19:31:38.140609   33509 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0103 19:31:38.153011   33509 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0103 19:31:38.164525   33509 docker.go:203] disabling cri-docker service (if available) ...
	I0103 19:31:38.164577   33509 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0103 19:31:38.175957   33509 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0103 19:31:38.187555   33509 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0103 19:31:38.285641   33509 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I0103 19:31:38.285768   33509 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0103 19:31:38.405514   33509 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0103 19:31:38.405555   33509 docker.go:219] disabling docker service ...
	I0103 19:31:38.405608   33509 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0103 19:31:38.418444   33509 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0103 19:31:38.429245   33509 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I0103 19:31:38.429341   33509 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0103 19:31:38.539827   33509 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0103 19:31:38.539921   33509 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0103 19:31:38.551851   33509 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I0103 19:31:38.551874   33509 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0103 19:31:38.648657   33509 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0103 19:31:38.661006   33509 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0103 19:31:38.676961   33509 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0103 19:31:38.677325   33509 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0103 19:31:38.677388   33509 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 19:31:38.685985   33509 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0103 19:31:38.686061   33509 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 19:31:38.694697   33509 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 19:31:38.702823   33509 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 19:31:38.711292   33509 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0103 19:31:38.720510   33509 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0103 19:31:38.728461   33509 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0103 19:31:38.728504   33509 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0103 19:31:38.728555   33509 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0103 19:31:38.740416   33509 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0103 19:31:38.749600   33509 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0103 19:31:38.861694   33509 ssh_runner.go:195] Run: sudo systemctl restart crio
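[Editor's note] The sed invocations above pin the pause image to registry.k8s.io/pause:3.9 and switch CRI-O to the cgroupfs cgroup manager (with conmon_cgroup = "pod") before crio is restarted. A sketch of the same substitutions applied in memory; the sample drop-in content is hypothetical, only the patterns mirror the logged commands against /etc/crio/crio.conf.d/02-crio.conf:

// Editor's sketch: in-memory equivalent of the sed edits logged above.
package main

import (
	"fmt"
	"regexp"
)

func configureCrio(conf string) string {
	// pause_image = ... -> registry.k8s.io/pause:3.9
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	// cgroup_manager = ... -> cgroupfs
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// drop any existing conmon_cgroup line, then re-add it after cgroup_manager
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^cgroup_manager = .*$`).
		ReplaceAllString(conf, "$0\nconmon_cgroup = \"pod\"")
	return conf
}

func main() {
	// Hypothetical drop-in content before the edits.
	sample := `[crio.image]
pause_image = "registry.k8s.io/pause:3.6"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	fmt.Print(configureCrio(sample))
}

The cgroupfs choice corresponds to the "configuring cri-o to use \"cgroupfs\" as cgroup driver" decision at crio.go:70 earlier in the log.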
	I0103 19:31:39.017283   33509 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0103 19:31:39.017356   33509 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0103 19:31:39.025272   33509 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0103 19:31:39.025293   33509 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0103 19:31:39.025299   33509 command_runner.go:130] > Device: 16h/22d	Inode: 796         Links: 1
	I0103 19:31:39.025310   33509 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0103 19:31:39.025319   33509 command_runner.go:130] > Access: 2024-01-03 19:31:38.960982388 +0000
	I0103 19:31:39.025329   33509 command_runner.go:130] > Modify: 2024-01-03 19:31:38.960982388 +0000
	I0103 19:31:39.025338   33509 command_runner.go:130] > Change: 2024-01-03 19:31:38.960982388 +0000
	I0103 19:31:39.025344   33509 command_runner.go:130] >  Birth: -
	I0103 19:31:39.025362   33509 start.go:543] Will wait 60s for crictl version
	I0103 19:31:39.025406   33509 ssh_runner.go:195] Run: which crictl
	I0103 19:31:39.028842   33509 command_runner.go:130] > /usr/bin/crictl
	I0103 19:31:39.028911   33509 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0103 19:31:39.069285   33509 command_runner.go:130] > Version:  0.1.0
	I0103 19:31:39.069316   33509 command_runner.go:130] > RuntimeName:  cri-o
	I0103 19:31:39.069324   33509 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0103 19:31:39.069332   33509 command_runner.go:130] > RuntimeApiVersion:  v1
	I0103 19:31:39.069364   33509 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0103 19:31:39.069434   33509 ssh_runner.go:195] Run: crio --version
	I0103 19:31:39.115343   33509 command_runner.go:130] > crio version 1.24.1
	I0103 19:31:39.115370   33509 command_runner.go:130] > Version:          1.24.1
	I0103 19:31:39.115382   33509 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0103 19:31:39.115389   33509 command_runner.go:130] > GitTreeState:     dirty
	I0103 19:31:39.115396   33509 command_runner.go:130] > BuildDate:        2023-12-16T11:46:37Z
	I0103 19:31:39.115403   33509 command_runner.go:130] > GoVersion:        go1.19.9
	I0103 19:31:39.115409   33509 command_runner.go:130] > Compiler:         gc
	I0103 19:31:39.115415   33509 command_runner.go:130] > Platform:         linux/amd64
	I0103 19:31:39.115429   33509 command_runner.go:130] > Linkmode:         dynamic
	I0103 19:31:39.115444   33509 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0103 19:31:39.115449   33509 command_runner.go:130] > SeccompEnabled:   true
	I0103 19:31:39.115454   33509 command_runner.go:130] > AppArmorEnabled:  false
	I0103 19:31:39.115544   33509 ssh_runner.go:195] Run: crio --version
	I0103 19:31:39.160311   33509 command_runner.go:130] > crio version 1.24.1
	I0103 19:31:39.160334   33509 command_runner.go:130] > Version:          1.24.1
	I0103 19:31:39.160341   33509 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0103 19:31:39.160345   33509 command_runner.go:130] > GitTreeState:     dirty
	I0103 19:31:39.160352   33509 command_runner.go:130] > BuildDate:        2023-12-16T11:46:37Z
	I0103 19:31:39.160356   33509 command_runner.go:130] > GoVersion:        go1.19.9
	I0103 19:31:39.160360   33509 command_runner.go:130] > Compiler:         gc
	I0103 19:31:39.160365   33509 command_runner.go:130] > Platform:         linux/amd64
	I0103 19:31:39.160370   33509 command_runner.go:130] > Linkmode:         dynamic
	I0103 19:31:39.160377   33509 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0103 19:31:39.160382   33509 command_runner.go:130] > SeccompEnabled:   true
	I0103 19:31:39.160386   33509 command_runner.go:130] > AppArmorEnabled:  false
	I0103 19:31:39.164893   33509 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0103 19:31:39.166636   33509 main.go:141] libmachine: (multinode-484895) Calling .GetIP
	I0103 19:31:39.169073   33509 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:31:39.169461   33509 main.go:141] libmachine: (multinode-484895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:f0:8c", ip: ""} in network mk-multinode-484895: {Iface:virbr1 ExpiryTime:2024-01-03 20:31:29 +0000 UTC Type:0 Mac:52:54:00:28:f0:8c Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:multinode-484895 Clientid:01:52:54:00:28:f0:8c}
	I0103 19:31:39.169494   33509 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined IP address 192.168.39.191 and MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:31:39.169658   33509 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0103 19:31:39.173861   33509 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0103 19:31:39.185554   33509 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0103 19:31:39.185618   33509 ssh_runner.go:195] Run: sudo crictl images --output json
	I0103 19:31:39.221118   33509 command_runner.go:130] > {
	I0103 19:31:39.221143   33509 command_runner.go:130] >   "images": [
	I0103 19:31:39.221149   33509 command_runner.go:130] >     {
	I0103 19:31:39.221159   33509 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0103 19:31:39.221166   33509 command_runner.go:130] >       "repoTags": [
	I0103 19:31:39.221174   33509 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0103 19:31:39.221199   33509 command_runner.go:130] >       ],
	I0103 19:31:39.221211   33509 command_runner.go:130] >       "repoDigests": [
	I0103 19:31:39.221226   33509 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0103 19:31:39.221242   33509 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0103 19:31:39.221252   33509 command_runner.go:130] >       ],
	I0103 19:31:39.221260   33509 command_runner.go:130] >       "size": "750414",
	I0103 19:31:39.221270   33509 command_runner.go:130] >       "uid": {
	I0103 19:31:39.221280   33509 command_runner.go:130] >         "value": "65535"
	I0103 19:31:39.221289   33509 command_runner.go:130] >       },
	I0103 19:31:39.221298   33509 command_runner.go:130] >       "username": "",
	I0103 19:31:39.221329   33509 command_runner.go:130] >       "spec": null,
	I0103 19:31:39.221340   33509 command_runner.go:130] >       "pinned": false
	I0103 19:31:39.221346   33509 command_runner.go:130] >     }
	I0103 19:31:39.221352   33509 command_runner.go:130] >   ]
	I0103 19:31:39.221361   33509 command_runner.go:130] > }
	I0103 19:31:39.222709   33509 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
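[Editor's note] The decision at crio.go:492 above follows from the `crictl images --output json` listing: only the pause image is present, so the preload tarball has to be copied over. A rough sketch of that check against the JSON shape shown in the log (field names taken from the output above; the helper itself is illustrative, not minikube's code):

// Editor's sketch: decide whether preloaded images are already on the node by
// looking for an expected repo tag in `crictl images --output json`.
package main

import (
	"encoding/json"
	"fmt"
)

type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func hasImage(crictlJSON []byte, wantTag string) (bool, error) {
	var list imageList
	if err := json.Unmarshal(crictlJSON, &list); err != nil {
		return false, err
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if tag == wantTag {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	// Abbreviated version of the listing in the log: only pause:3.9 is present.
	out := []byte(`{"images":[{"repoTags":["registry.k8s.io/pause:3.9"]}]}`)
	ok, err := hasImage(out, "registry.k8s.io/kube-apiserver:v1.28.4")
	if err != nil {
		panic(err)
	}
	fmt.Println("preloaded images present:", ok) // false -> copy /preloaded.tar.lz4
}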
	I0103 19:31:39.222791   33509 ssh_runner.go:195] Run: which lz4
	I0103 19:31:39.226801   33509 command_runner.go:130] > /usr/bin/lz4
	I0103 19:31:39.226834   33509 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0103 19:31:39.226932   33509 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0103 19:31:39.231015   33509 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0103 19:31:39.231058   33509 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0103 19:31:39.231085   33509 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0103 19:31:40.840116   33509 crio.go:444] Took 1.613221 seconds to copy over tarball
	I0103 19:31:40.840185   33509 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0103 19:31:43.509993   33509 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.669787418s)
	I0103 19:31:43.510018   33509 crio.go:451] Took 2.669876 seconds to extract the tarball
	I0103 19:31:43.510026   33509 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0103 19:31:43.550443   33509 ssh_runner.go:195] Run: sudo crictl images --output json
	I0103 19:31:43.593000   33509 command_runner.go:130] > {
	I0103 19:31:43.593022   33509 command_runner.go:130] >   "images": [
	I0103 19:31:43.593028   33509 command_runner.go:130] >     {
	I0103 19:31:43.593041   33509 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I0103 19:31:43.593048   33509 command_runner.go:130] >       "repoTags": [
	I0103 19:31:43.593055   33509 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0103 19:31:43.593060   33509 command_runner.go:130] >       ],
	I0103 19:31:43.593067   33509 command_runner.go:130] >       "repoDigests": [
	I0103 19:31:43.593085   33509 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0103 19:31:43.593102   33509 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I0103 19:31:43.593110   33509 command_runner.go:130] >       ],
	I0103 19:31:43.593118   33509 command_runner.go:130] >       "size": "65258016",
	I0103 19:31:43.593129   33509 command_runner.go:130] >       "uid": null,
	I0103 19:31:43.593135   33509 command_runner.go:130] >       "username": "",
	I0103 19:31:43.593144   33509 command_runner.go:130] >       "spec": null,
	I0103 19:31:43.593155   33509 command_runner.go:130] >       "pinned": false
	I0103 19:31:43.593163   33509 command_runner.go:130] >     },
	I0103 19:31:43.593169   33509 command_runner.go:130] >     {
	I0103 19:31:43.593180   33509 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0103 19:31:43.593189   33509 command_runner.go:130] >       "repoTags": [
	I0103 19:31:43.593199   33509 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0103 19:31:43.593208   33509 command_runner.go:130] >       ],
	I0103 19:31:43.593216   33509 command_runner.go:130] >       "repoDigests": [
	I0103 19:31:43.593230   33509 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0103 19:31:43.593249   33509 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0103 19:31:43.593259   33509 command_runner.go:130] >       ],
	I0103 19:31:43.593274   33509 command_runner.go:130] >       "size": "31470524",
	I0103 19:31:43.593284   33509 command_runner.go:130] >       "uid": null,
	I0103 19:31:43.593294   33509 command_runner.go:130] >       "username": "",
	I0103 19:31:43.593304   33509 command_runner.go:130] >       "spec": null,
	I0103 19:31:43.593312   33509 command_runner.go:130] >       "pinned": false
	I0103 19:31:43.593321   33509 command_runner.go:130] >     },
	I0103 19:31:43.593330   33509 command_runner.go:130] >     {
	I0103 19:31:43.593344   33509 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0103 19:31:43.593354   33509 command_runner.go:130] >       "repoTags": [
	I0103 19:31:43.593365   33509 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0103 19:31:43.593374   33509 command_runner.go:130] >       ],
	I0103 19:31:43.593383   33509 command_runner.go:130] >       "repoDigests": [
	I0103 19:31:43.593400   33509 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0103 19:31:43.593416   33509 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0103 19:31:43.593426   33509 command_runner.go:130] >       ],
	I0103 19:31:43.593436   33509 command_runner.go:130] >       "size": "53621675",
	I0103 19:31:43.593447   33509 command_runner.go:130] >       "uid": null,
	I0103 19:31:43.593455   33509 command_runner.go:130] >       "username": "",
	I0103 19:31:43.593469   33509 command_runner.go:130] >       "spec": null,
	I0103 19:31:43.593495   33509 command_runner.go:130] >       "pinned": false
	I0103 19:31:43.593504   33509 command_runner.go:130] >     },
	I0103 19:31:43.593511   33509 command_runner.go:130] >     {
	I0103 19:31:43.593525   33509 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I0103 19:31:43.593541   33509 command_runner.go:130] >       "repoTags": [
	I0103 19:31:43.593553   33509 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0103 19:31:43.593563   33509 command_runner.go:130] >       ],
	I0103 19:31:43.593572   33509 command_runner.go:130] >       "repoDigests": [
	I0103 19:31:43.593587   33509 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I0103 19:31:43.593602   33509 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I0103 19:31:43.593620   33509 command_runner.go:130] >       ],
	I0103 19:31:43.593637   33509 command_runner.go:130] >       "size": "295456551",
	I0103 19:31:43.593647   33509 command_runner.go:130] >       "uid": {
	I0103 19:31:43.593659   33509 command_runner.go:130] >         "value": "0"
	I0103 19:31:43.593668   33509 command_runner.go:130] >       },
	I0103 19:31:43.593676   33509 command_runner.go:130] >       "username": "",
	I0103 19:31:43.593687   33509 command_runner.go:130] >       "spec": null,
	I0103 19:31:43.593702   33509 command_runner.go:130] >       "pinned": false
	I0103 19:31:43.593711   33509 command_runner.go:130] >     },
	I0103 19:31:43.593718   33509 command_runner.go:130] >     {
	I0103 19:31:43.593729   33509 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I0103 19:31:43.593739   33509 command_runner.go:130] >       "repoTags": [
	I0103 19:31:43.593752   33509 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I0103 19:31:43.593761   33509 command_runner.go:130] >       ],
	I0103 19:31:43.593769   33509 command_runner.go:130] >       "repoDigests": [
	I0103 19:31:43.593785   33509 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I0103 19:31:43.593802   33509 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I0103 19:31:43.593811   33509 command_runner.go:130] >       ],
	I0103 19:31:43.593819   33509 command_runner.go:130] >       "size": "127226832",
	I0103 19:31:43.593829   33509 command_runner.go:130] >       "uid": {
	I0103 19:31:43.593840   33509 command_runner.go:130] >         "value": "0"
	I0103 19:31:43.593847   33509 command_runner.go:130] >       },
	I0103 19:31:43.593858   33509 command_runner.go:130] >       "username": "",
	I0103 19:31:43.593866   33509 command_runner.go:130] >       "spec": null,
	I0103 19:31:43.593877   33509 command_runner.go:130] >       "pinned": false
	I0103 19:31:43.593890   33509 command_runner.go:130] >     },
	I0103 19:31:43.593900   33509 command_runner.go:130] >     {
	I0103 19:31:43.593913   33509 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I0103 19:31:43.593923   33509 command_runner.go:130] >       "repoTags": [
	I0103 19:31:43.593935   33509 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I0103 19:31:43.593942   33509 command_runner.go:130] >       ],
	I0103 19:31:43.593953   33509 command_runner.go:130] >       "repoDigests": [
	I0103 19:31:43.593968   33509 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I0103 19:31:43.593984   33509 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I0103 19:31:43.593993   33509 command_runner.go:130] >       ],
	I0103 19:31:43.594002   33509 command_runner.go:130] >       "size": "123261750",
	I0103 19:31:43.594010   33509 command_runner.go:130] >       "uid": {
	I0103 19:31:43.594021   33509 command_runner.go:130] >         "value": "0"
	I0103 19:31:43.594028   33509 command_runner.go:130] >       },
	I0103 19:31:43.594039   33509 command_runner.go:130] >       "username": "",
	I0103 19:31:43.594047   33509 command_runner.go:130] >       "spec": null,
	I0103 19:31:43.594057   33509 command_runner.go:130] >       "pinned": false
	I0103 19:31:43.594066   33509 command_runner.go:130] >     },
	I0103 19:31:43.594076   33509 command_runner.go:130] >     {
	I0103 19:31:43.594090   33509 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I0103 19:31:43.594101   33509 command_runner.go:130] >       "repoTags": [
	I0103 19:31:43.594112   33509 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I0103 19:31:43.594119   33509 command_runner.go:130] >       ],
	I0103 19:31:43.594129   33509 command_runner.go:130] >       "repoDigests": [
	I0103 19:31:43.594148   33509 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I0103 19:31:43.594164   33509 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I0103 19:31:43.594173   33509 command_runner.go:130] >       ],
	I0103 19:31:43.594181   33509 command_runner.go:130] >       "size": "74749335",
	I0103 19:31:43.594192   33509 command_runner.go:130] >       "uid": null,
	I0103 19:31:43.594203   33509 command_runner.go:130] >       "username": "",
	I0103 19:31:43.594213   33509 command_runner.go:130] >       "spec": null,
	I0103 19:31:43.594220   33509 command_runner.go:130] >       "pinned": false
	I0103 19:31:43.594229   33509 command_runner.go:130] >     },
	I0103 19:31:43.594236   33509 command_runner.go:130] >     {
	I0103 19:31:43.594250   33509 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I0103 19:31:43.594259   33509 command_runner.go:130] >       "repoTags": [
	I0103 19:31:43.594276   33509 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I0103 19:31:43.594286   33509 command_runner.go:130] >       ],
	I0103 19:31:43.594295   33509 command_runner.go:130] >       "repoDigests": [
	I0103 19:31:43.594327   33509 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I0103 19:31:43.594344   33509 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I0103 19:31:43.594350   33509 command_runner.go:130] >       ],
	I0103 19:31:43.594358   33509 command_runner.go:130] >       "size": "61551410",
	I0103 19:31:43.594369   33509 command_runner.go:130] >       "uid": {
	I0103 19:31:43.594379   33509 command_runner.go:130] >         "value": "0"
	I0103 19:31:43.594387   33509 command_runner.go:130] >       },
	I0103 19:31:43.594397   33509 command_runner.go:130] >       "username": "",
	I0103 19:31:43.594406   33509 command_runner.go:130] >       "spec": null,
	I0103 19:31:43.594416   33509 command_runner.go:130] >       "pinned": false
	I0103 19:31:43.594424   33509 command_runner.go:130] >     },
	I0103 19:31:43.594431   33509 command_runner.go:130] >     {
	I0103 19:31:43.594445   33509 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0103 19:31:43.594456   33509 command_runner.go:130] >       "repoTags": [
	I0103 19:31:43.594465   33509 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0103 19:31:43.594478   33509 command_runner.go:130] >       ],
	I0103 19:31:43.594490   33509 command_runner.go:130] >       "repoDigests": [
	I0103 19:31:43.594505   33509 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0103 19:31:43.594535   33509 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0103 19:31:43.594542   33509 command_runner.go:130] >       ],
	I0103 19:31:43.594550   33509 command_runner.go:130] >       "size": "750414",
	I0103 19:31:43.594560   33509 command_runner.go:130] >       "uid": {
	I0103 19:31:43.594570   33509 command_runner.go:130] >         "value": "65535"
	I0103 19:31:43.594579   33509 command_runner.go:130] >       },
	I0103 19:31:43.594589   33509 command_runner.go:130] >       "username": "",
	I0103 19:31:43.594599   33509 command_runner.go:130] >       "spec": null,
	I0103 19:31:43.594609   33509 command_runner.go:130] >       "pinned": false
	I0103 19:31:43.594615   33509 command_runner.go:130] >     }
	I0103 19:31:43.594623   33509 command_runner.go:130] >   ]
	I0103 19:31:43.594637   33509 command_runner.go:130] > }
	I0103 19:31:43.594767   33509 crio.go:496] all images are preloaded for cri-o runtime.
	I0103 19:31:43.594780   33509 cache_images.go:84] Images are preloaded, skipping loading
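	(Note, not part of the captured log: the JSON listing above is CRI-O's own image inventory, which is why minikube decides it can skip loading the preload tarball. A minimal sketch for querying the same inventory by hand on the node, assuming crictl is installed and CRI-O is listening on its default socket:
	# Query CRI-O's image store directly; --runtime-endpoint and --output are standard crictl flags
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock images --output json
	)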
	I0103 19:31:43.594857   33509 ssh_runner.go:195] Run: crio config
	I0103 19:31:43.644774   33509 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0103 19:31:43.644805   33509 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0103 19:31:43.644815   33509 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0103 19:31:43.644820   33509 command_runner.go:130] > #
	I0103 19:31:43.644830   33509 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0103 19:31:43.644841   33509 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0103 19:31:43.644851   33509 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0103 19:31:43.644861   33509 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0103 19:31:43.644868   33509 command_runner.go:130] > # reload'.
	I0103 19:31:43.644885   33509 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0103 19:31:43.644896   33509 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0103 19:31:43.644905   33509 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0103 19:31:43.644920   33509 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0103 19:31:43.644923   33509 command_runner.go:130] > [crio]
	I0103 19:31:43.644929   33509 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0103 19:31:43.644936   33509 command_runner.go:130] > # container images, in this directory.
	I0103 19:31:43.644944   33509 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0103 19:31:43.644976   33509 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0103 19:31:43.644990   33509 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0103 19:31:43.645005   33509 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0103 19:31:43.645017   33509 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0103 19:31:43.645059   33509 command_runner.go:130] > storage_driver = "overlay"
	I0103 19:31:43.645077   33509 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0103 19:31:43.645087   33509 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0103 19:31:43.645095   33509 command_runner.go:130] > storage_option = [
	I0103 19:31:43.645228   33509 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0103 19:31:43.645245   33509 command_runner.go:130] > ]
	I0103 19:31:43.645256   33509 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0103 19:31:43.645271   33509 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0103 19:31:43.645563   33509 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0103 19:31:43.645577   33509 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0103 19:31:43.645587   33509 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0103 19:31:43.645595   33509 command_runner.go:130] > # always happen on a node reboot
	I0103 19:31:43.645877   33509 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0103 19:31:43.645897   33509 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0103 19:31:43.645908   33509 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0103 19:31:43.645927   33509 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0103 19:31:43.646161   33509 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0103 19:31:43.646179   33509 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0103 19:31:43.646192   33509 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0103 19:31:43.646501   33509 command_runner.go:130] > # internal_wipe = true
	I0103 19:31:43.646514   33509 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0103 19:31:43.646540   33509 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0103 19:31:43.646550   33509 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0103 19:31:43.646828   33509 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0103 19:31:43.646841   33509 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0103 19:31:43.646848   33509 command_runner.go:130] > [crio.api]
	I0103 19:31:43.646856   33509 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0103 19:31:43.647111   33509 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0103 19:31:43.647123   33509 command_runner.go:130] > # IP address on which the stream server will listen.
	I0103 19:31:43.647381   33509 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0103 19:31:43.647396   33509 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0103 19:31:43.647404   33509 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0103 19:31:43.647695   33509 command_runner.go:130] > # stream_port = "0"
	I0103 19:31:43.647708   33509 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0103 19:31:43.647976   33509 command_runner.go:130] > # stream_enable_tls = false
	I0103 19:31:43.647990   33509 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0103 19:31:43.648189   33509 command_runner.go:130] > # stream_idle_timeout = ""
	I0103 19:31:43.648202   33509 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0103 19:31:43.648213   33509 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0103 19:31:43.648221   33509 command_runner.go:130] > # minutes.
	I0103 19:31:43.648465   33509 command_runner.go:130] > # stream_tls_cert = ""
	I0103 19:31:43.648479   33509 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0103 19:31:43.648492   33509 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0103 19:31:43.648671   33509 command_runner.go:130] > # stream_tls_key = ""
	I0103 19:31:43.648707   33509 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0103 19:31:43.648722   33509 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0103 19:31:43.648735   33509 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0103 19:31:43.649195   33509 command_runner.go:130] > # stream_tls_ca = ""
	I0103 19:31:43.649211   33509 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0103 19:31:43.649636   33509 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0103 19:31:43.649654   33509 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0103 19:31:43.649864   33509 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0103 19:31:43.649903   33509 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0103 19:31:43.649918   33509 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0103 19:31:43.649925   33509 command_runner.go:130] > [crio.runtime]
	I0103 19:31:43.649937   33509 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0103 19:31:43.649951   33509 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0103 19:31:43.649960   33509 command_runner.go:130] > # "nofile=1024:2048"
	I0103 19:31:43.649971   33509 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0103 19:31:43.650129   33509 command_runner.go:130] > # default_ulimits = [
	I0103 19:31:43.650384   33509 command_runner.go:130] > # ]
	I0103 19:31:43.650404   33509 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0103 19:31:43.651028   33509 command_runner.go:130] > # no_pivot = false
	I0103 19:31:43.651047   33509 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0103 19:31:43.651058   33509 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0103 19:31:43.651791   33509 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0103 19:31:43.651811   33509 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0103 19:31:43.651818   33509 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0103 19:31:43.651826   33509 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0103 19:31:43.652077   33509 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0103 19:31:43.652090   33509 command_runner.go:130] > # Cgroup setting for conmon
	I0103 19:31:43.652102   33509 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0103 19:31:43.652355   33509 command_runner.go:130] > conmon_cgroup = "pod"
	I0103 19:31:43.652372   33509 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0103 19:31:43.652382   33509 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0103 19:31:43.652396   33509 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0103 19:31:43.652406   33509 command_runner.go:130] > conmon_env = [
	I0103 19:31:43.652684   33509 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0103 19:31:43.652696   33509 command_runner.go:130] > ]
	I0103 19:31:43.652702   33509 command_runner.go:130] > # Additional environment variables to set for all the
	I0103 19:31:43.652707   33509 command_runner.go:130] > # containers. These are overridden if set in the
	I0103 19:31:43.652714   33509 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0103 19:31:43.652846   33509 command_runner.go:130] > # default_env = [
	I0103 19:31:43.652857   33509 command_runner.go:130] > # ]
	I0103 19:31:43.652865   33509 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0103 19:31:43.652869   33509 command_runner.go:130] > # selinux = false
	I0103 19:31:43.652875   33509 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0103 19:31:43.652884   33509 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0103 19:31:43.652891   33509 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0103 19:31:43.652898   33509 command_runner.go:130] > # seccomp_profile = ""
	I0103 19:31:43.652908   33509 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0103 19:31:43.652918   33509 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0103 19:31:43.652932   33509 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0103 19:31:43.652944   33509 command_runner.go:130] > # which might increase security.
	I0103 19:31:43.652955   33509 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0103 19:31:43.652966   33509 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0103 19:31:43.652976   33509 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0103 19:31:43.652982   33509 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0103 19:31:43.652991   33509 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0103 19:31:43.653004   33509 command_runner.go:130] > # This option supports live configuration reload.
	I0103 19:31:43.653013   33509 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0103 19:31:43.653027   33509 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0103 19:31:43.653037   33509 command_runner.go:130] > # the cgroup blockio controller.
	I0103 19:31:43.653048   33509 command_runner.go:130] > # blockio_config_file = ""
	I0103 19:31:43.653060   33509 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0103 19:31:43.653067   33509 command_runner.go:130] > # irqbalance daemon.
	I0103 19:31:43.653072   33509 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0103 19:31:43.653084   33509 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0103 19:31:43.653096   33509 command_runner.go:130] > # This option supports live configuration reload.
	I0103 19:31:43.653106   33509 command_runner.go:130] > # rdt_config_file = ""
	I0103 19:31:43.653120   33509 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0103 19:31:43.653138   33509 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0103 19:31:43.653152   33509 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0103 19:31:43.653160   33509 command_runner.go:130] > # separate_pull_cgroup = ""
	I0103 19:31:43.653171   33509 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0103 19:31:43.653185   33509 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0103 19:31:43.653196   33509 command_runner.go:130] > # will be added.
	I0103 19:31:43.653204   33509 command_runner.go:130] > # default_capabilities = [
	I0103 19:31:43.653213   33509 command_runner.go:130] > # 	"CHOWN",
	I0103 19:31:43.653220   33509 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0103 19:31:43.653230   33509 command_runner.go:130] > # 	"FSETID",
	I0103 19:31:43.653237   33509 command_runner.go:130] > # 	"FOWNER",
	I0103 19:31:43.653244   33509 command_runner.go:130] > # 	"SETGID",
	I0103 19:31:43.653250   33509 command_runner.go:130] > # 	"SETUID",
	I0103 19:31:43.653260   33509 command_runner.go:130] > # 	"SETPCAP",
	I0103 19:31:43.653268   33509 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0103 19:31:43.653277   33509 command_runner.go:130] > # 	"KILL",
	I0103 19:31:43.653283   33509 command_runner.go:130] > # ]
	I0103 19:31:43.653295   33509 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0103 19:31:43.653308   33509 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0103 19:31:43.653317   33509 command_runner.go:130] > # default_sysctls = [
	I0103 19:31:43.653322   33509 command_runner.go:130] > # ]
	I0103 19:31:43.653327   33509 command_runner.go:130] > # List of devices on the host that a
	I0103 19:31:43.653338   33509 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0103 19:31:43.653350   33509 command_runner.go:130] > # allowed_devices = [
	I0103 19:31:43.653361   33509 command_runner.go:130] > # 	"/dev/fuse",
	I0103 19:31:43.653371   33509 command_runner.go:130] > # ]
	I0103 19:31:43.653380   33509 command_runner.go:130] > # List of additional devices, specified as
	I0103 19:31:43.653395   33509 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0103 19:31:43.653407   33509 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0103 19:31:43.653432   33509 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0103 19:31:43.653443   33509 command_runner.go:130] > # additional_devices = [
	I0103 19:31:43.653449   33509 command_runner.go:130] > # ]
	I0103 19:31:43.653461   33509 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0103 19:31:43.653471   33509 command_runner.go:130] > # cdi_spec_dirs = [
	I0103 19:31:43.653488   33509 command_runner.go:130] > # 	"/etc/cdi",
	I0103 19:31:43.653496   33509 command_runner.go:130] > # 	"/var/run/cdi",
	I0103 19:31:43.653503   33509 command_runner.go:130] > # ]
	I0103 19:31:43.653513   33509 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0103 19:31:43.653523   33509 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0103 19:31:43.653530   33509 command_runner.go:130] > # Defaults to false.
	I0103 19:31:43.653542   33509 command_runner.go:130] > # device_ownership_from_security_context = false
	I0103 19:31:43.653553   33509 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0103 19:31:43.653566   33509 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0103 19:31:43.653576   33509 command_runner.go:130] > # hooks_dir = [
	I0103 19:31:43.653584   33509 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0103 19:31:43.653593   33509 command_runner.go:130] > # ]
	I0103 19:31:43.653604   33509 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0103 19:31:43.653614   33509 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0103 19:31:43.653621   33509 command_runner.go:130] > # its default mounts from the following two files:
	I0103 19:31:43.653629   33509 command_runner.go:130] > #
	I0103 19:31:43.653640   33509 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0103 19:31:43.653654   33509 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0103 19:31:43.653667   33509 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0103 19:31:43.653675   33509 command_runner.go:130] > #
	I0103 19:31:43.653685   33509 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0103 19:31:43.653698   33509 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0103 19:31:43.653715   33509 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0103 19:31:43.653724   33509 command_runner.go:130] > #      only add mounts it finds in this file.
	I0103 19:31:43.653729   33509 command_runner.go:130] > #
	I0103 19:31:43.653739   33509 command_runner.go:130] > # default_mounts_file = ""
	I0103 19:31:43.653752   33509 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0103 19:31:43.653767   33509 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0103 19:31:43.653775   33509 command_runner.go:130] > pids_limit = 1024
	I0103 19:31:43.653789   33509 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0103 19:31:43.653803   33509 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0103 19:31:43.653818   33509 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0103 19:31:43.653836   33509 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0103 19:31:43.653847   33509 command_runner.go:130] > # log_size_max = -1
	I0103 19:31:43.653863   33509 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0103 19:31:43.653874   33509 command_runner.go:130] > # log_to_journald = false
	I0103 19:31:43.653888   33509 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0103 19:31:43.653899   33509 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0103 19:31:43.653909   33509 command_runner.go:130] > # Path to directory for container attach sockets.
	I0103 19:31:43.653922   33509 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0103 19:31:43.653931   33509 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0103 19:31:43.653942   33509 command_runner.go:130] > # bind_mount_prefix = ""
	I0103 19:31:43.653956   33509 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0103 19:31:43.653968   33509 command_runner.go:130] > # read_only = false
	I0103 19:31:43.653980   33509 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0103 19:31:43.653993   33509 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0103 19:31:43.654002   33509 command_runner.go:130] > # live configuration reload.
	I0103 19:31:43.654013   33509 command_runner.go:130] > # log_level = "info"
	I0103 19:31:43.654026   33509 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0103 19:31:43.654039   33509 command_runner.go:130] > # This option supports live configuration reload.
	I0103 19:31:43.654049   33509 command_runner.go:130] > # log_filter = ""
	I0103 19:31:43.654063   33509 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0103 19:31:43.654075   33509 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0103 19:31:43.654086   33509 command_runner.go:130] > # separated by comma.
	I0103 19:31:43.654093   33509 command_runner.go:130] > # uid_mappings = ""
	I0103 19:31:43.654104   33509 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0103 19:31:43.654124   33509 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0103 19:31:43.654134   33509 command_runner.go:130] > # separated by comma.
	I0103 19:31:43.654141   33509 command_runner.go:130] > # gid_mappings = ""
	I0103 19:31:43.654151   33509 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0103 19:31:43.654161   33509 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0103 19:31:43.654177   33509 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0103 19:31:43.654192   33509 command_runner.go:130] > # minimum_mappable_uid = -1
	I0103 19:31:43.654203   33509 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0103 19:31:43.654217   33509 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0103 19:31:43.654228   33509 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0103 19:31:43.654239   33509 command_runner.go:130] > # minimum_mappable_gid = -1
	I0103 19:31:43.654250   33509 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0103 19:31:43.654263   33509 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0103 19:31:43.654273   33509 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0103 19:31:43.654284   33509 command_runner.go:130] > # ctr_stop_timeout = 30
	I0103 19:31:43.654294   33509 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0103 19:31:43.654306   33509 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0103 19:31:43.654315   33509 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0103 19:31:43.654321   33509 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0103 19:31:43.654329   33509 command_runner.go:130] > drop_infra_ctr = false
	I0103 19:31:43.654341   33509 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0103 19:31:43.654354   33509 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0103 19:31:43.654367   33509 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0103 19:31:43.654379   33509 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0103 19:31:43.654393   33509 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0103 19:31:43.654405   33509 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0103 19:31:43.654415   33509 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0103 19:31:43.654428   33509 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0103 19:31:43.654441   33509 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0103 19:31:43.654455   33509 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0103 19:31:43.654470   33509 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0103 19:31:43.654485   33509 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0103 19:31:43.654496   33509 command_runner.go:130] > # default_runtime = "runc"
	I0103 19:31:43.654527   33509 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0103 19:31:43.654544   33509 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0103 19:31:43.654563   33509 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0103 19:31:43.654575   33509 command_runner.go:130] > # creation as a file is not desired either.
	I0103 19:31:43.654592   33509 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0103 19:31:43.654604   33509 command_runner.go:130] > # the hostname is being managed dynamically.
	I0103 19:31:43.654615   33509 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0103 19:31:43.654620   33509 command_runner.go:130] > # ]
	I0103 19:31:43.654632   33509 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0103 19:31:43.654647   33509 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0103 19:31:43.654662   33509 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0103 19:31:43.654674   33509 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0103 19:31:43.654683   33509 command_runner.go:130] > #
	I0103 19:31:43.654692   33509 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0103 19:31:43.654704   33509 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0103 19:31:43.654715   33509 command_runner.go:130] > #  runtime_type = "oci"
	I0103 19:31:43.654725   33509 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0103 19:31:43.654737   33509 command_runner.go:130] > #  privileged_without_host_devices = false
	I0103 19:31:43.654748   33509 command_runner.go:130] > #  allowed_annotations = []
	I0103 19:31:43.654755   33509 command_runner.go:130] > # Where:
	I0103 19:31:43.654765   33509 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0103 19:31:43.654779   33509 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0103 19:31:43.654794   33509 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0103 19:31:43.654808   33509 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0103 19:31:43.654818   33509 command_runner.go:130] > #   in $PATH.
	I0103 19:31:43.654832   33509 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0103 19:31:43.654844   33509 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0103 19:31:43.654858   33509 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0103 19:31:43.654868   33509 command_runner.go:130] > #   state.
	I0103 19:31:43.654880   33509 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0103 19:31:43.654893   33509 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0103 19:31:43.654907   33509 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0103 19:31:43.654920   33509 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0103 19:31:43.654934   33509 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0103 19:31:43.654948   33509 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0103 19:31:43.654959   33509 command_runner.go:130] > #   The currently recognized values are:
	I0103 19:31:43.654975   33509 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0103 19:31:43.654990   33509 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0103 19:31:43.655004   33509 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0103 19:31:43.655018   33509 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0103 19:31:43.655034   33509 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0103 19:31:43.655050   33509 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0103 19:31:43.655065   33509 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0103 19:31:43.655080   33509 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0103 19:31:43.655093   33509 command_runner.go:130] > #   should be moved to the container's cgroup
	I0103 19:31:43.655105   33509 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0103 19:31:43.655120   33509 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0103 19:31:43.655130   33509 command_runner.go:130] > runtime_type = "oci"
	I0103 19:31:43.655141   33509 command_runner.go:130] > runtime_root = "/run/runc"
	I0103 19:31:43.655151   33509 command_runner.go:130] > runtime_config_path = ""
	I0103 19:31:43.655161   33509 command_runner.go:130] > monitor_path = ""
	I0103 19:31:43.655171   33509 command_runner.go:130] > monitor_cgroup = ""
	I0103 19:31:43.655180   33509 command_runner.go:130] > monitor_exec_cgroup = ""
	I0103 19:31:43.655194   33509 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0103 19:31:43.655205   33509 command_runner.go:130] > # running containers
	I0103 19:31:43.655214   33509 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0103 19:31:43.655229   33509 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0103 19:31:43.655303   33509 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0103 19:31:43.655315   33509 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0103 19:31:43.655324   33509 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0103 19:31:43.655334   33509 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0103 19:31:43.655346   33509 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0103 19:31:43.655357   33509 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0103 19:31:43.655372   33509 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0103 19:31:43.655384   33509 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0103 19:31:43.655399   33509 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0103 19:31:43.655411   33509 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0103 19:31:43.655426   33509 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0103 19:31:43.655442   33509 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0103 19:31:43.655458   33509 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0103 19:31:43.655471   33509 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0103 19:31:43.655490   33509 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0103 19:31:43.655510   33509 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0103 19:31:43.655523   33509 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0103 19:31:43.655539   33509 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0103 19:31:43.655549   33509 command_runner.go:130] > # Example:
	I0103 19:31:43.655560   33509 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0103 19:31:43.655572   33509 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0103 19:31:43.655584   33509 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0103 19:31:43.655597   33509 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0103 19:31:43.655611   33509 command_runner.go:130] > # cpuset = 0
	I0103 19:31:43.655622   33509 command_runner.go:130] > # cpushares = "0-1"
	I0103 19:31:43.655630   33509 command_runner.go:130] > # Where:
	I0103 19:31:43.655638   33509 command_runner.go:130] > # The workload name is workload-type.
	I0103 19:31:43.655654   33509 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0103 19:31:43.655667   33509 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0103 19:31:43.655681   33509 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0103 19:31:43.655697   33509 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0103 19:31:43.655711   33509 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0103 19:31:43.655720   33509 command_runner.go:130] > # 
	I0103 19:31:43.655732   33509 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0103 19:31:43.655740   33509 command_runner.go:130] > #
	I0103 19:31:43.655751   33509 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0103 19:31:43.655765   33509 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0103 19:31:43.655780   33509 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0103 19:31:43.655794   33509 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0103 19:31:43.655807   33509 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0103 19:31:43.655817   33509 command_runner.go:130] > [crio.image]
	I0103 19:31:43.655833   33509 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0103 19:31:43.655845   33509 command_runner.go:130] > # default_transport = "docker://"
	I0103 19:31:43.655856   33509 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0103 19:31:43.655871   33509 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0103 19:31:43.655882   33509 command_runner.go:130] > # global_auth_file = ""
	I0103 19:31:43.655893   33509 command_runner.go:130] > # The image used to instantiate infra containers.
	I0103 19:31:43.655905   33509 command_runner.go:130] > # This option supports live configuration reload.
	I0103 19:31:43.655917   33509 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0103 19:31:43.655932   33509 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0103 19:31:43.655945   33509 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0103 19:31:43.655959   33509 command_runner.go:130] > # This option supports live configuration reload.
	I0103 19:31:43.655970   33509 command_runner.go:130] > # pause_image_auth_file = ""
	I0103 19:31:43.655984   33509 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0103 19:31:43.655998   33509 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0103 19:31:43.656012   33509 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0103 19:31:43.656027   33509 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0103 19:31:43.656038   33509 command_runner.go:130] > # pause_command = "/pause"
	I0103 19:31:43.656052   33509 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0103 19:31:43.656071   33509 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0103 19:31:43.656085   33509 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0103 19:31:43.656099   33509 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0103 19:31:43.656114   33509 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0103 19:31:43.656120   33509 command_runner.go:130] > # signature_policy = ""
	I0103 19:31:43.656129   33509 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0103 19:31:43.656139   33509 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0103 19:31:43.656146   33509 command_runner.go:130] > # changing them here.
	I0103 19:31:43.656153   33509 command_runner.go:130] > # insecure_registries = [
	I0103 19:31:43.656159   33509 command_runner.go:130] > # ]
	I0103 19:31:43.656170   33509 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0103 19:31:43.656178   33509 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0103 19:31:43.656185   33509 command_runner.go:130] > # image_volumes = "mkdir"
	I0103 19:31:43.656194   33509 command_runner.go:130] > # Temporary directory to use for storing big files
	I0103 19:31:43.656202   33509 command_runner.go:130] > # big_files_temporary_dir = ""
	I0103 19:31:43.656212   33509 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0103 19:31:43.656219   33509 command_runner.go:130] > # CNI plugins.
	I0103 19:31:43.656226   33509 command_runner.go:130] > [crio.network]
	I0103 19:31:43.656239   33509 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0103 19:31:43.656247   33509 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0103 19:31:43.656255   33509 command_runner.go:130] > # cni_default_network = ""
	I0103 19:31:43.656267   33509 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0103 19:31:43.656275   33509 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0103 19:31:43.656284   33509 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0103 19:31:43.656291   33509 command_runner.go:130] > # plugin_dirs = [
	I0103 19:31:43.656298   33509 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0103 19:31:43.656305   33509 command_runner.go:130] > # ]
	I0103 19:31:43.656314   33509 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0103 19:31:43.656325   33509 command_runner.go:130] > [crio.metrics]
	I0103 19:31:43.656334   33509 command_runner.go:130] > # Globally enable or disable metrics support.
	I0103 19:31:43.656345   33509 command_runner.go:130] > enable_metrics = true
	I0103 19:31:43.656357   33509 command_runner.go:130] > # Specify enabled metrics collectors.
	I0103 19:31:43.656369   33509 command_runner.go:130] > # Per default all metrics are enabled.
	I0103 19:31:43.656383   33509 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0103 19:31:43.656394   33509 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0103 19:31:43.656408   33509 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0103 19:31:43.656423   33509 command_runner.go:130] > # metrics_collectors = [
	I0103 19:31:43.656433   33509 command_runner.go:130] > # 	"operations",
	I0103 19:31:43.656446   33509 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0103 19:31:43.656458   33509 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0103 19:31:43.656469   33509 command_runner.go:130] > # 	"operations_errors",
	I0103 19:31:43.656478   33509 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0103 19:31:43.656488   33509 command_runner.go:130] > # 	"image_pulls_by_name",
	I0103 19:31:43.656498   33509 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0103 19:31:43.656509   33509 command_runner.go:130] > # 	"image_pulls_failures",
	I0103 19:31:43.656520   33509 command_runner.go:130] > # 	"image_pulls_successes",
	I0103 19:31:43.656529   33509 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0103 19:31:43.656539   33509 command_runner.go:130] > # 	"image_layer_reuse",
	I0103 19:31:43.656548   33509 command_runner.go:130] > # 	"containers_oom_total",
	I0103 19:31:43.656559   33509 command_runner.go:130] > # 	"containers_oom",
	I0103 19:31:43.656567   33509 command_runner.go:130] > # 	"processes_defunct",
	I0103 19:31:43.656578   33509 command_runner.go:130] > # 	"operations_total",
	I0103 19:31:43.656590   33509 command_runner.go:130] > # 	"operations_latency_seconds",
	I0103 19:31:43.656602   33509 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0103 19:31:43.656615   33509 command_runner.go:130] > # 	"operations_errors_total",
	I0103 19:31:43.656627   33509 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0103 19:31:43.656639   33509 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0103 19:31:43.656650   33509 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0103 19:31:43.656660   33509 command_runner.go:130] > # 	"image_pulls_success_total",
	I0103 19:31:43.656669   33509 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0103 19:31:43.656680   33509 command_runner.go:130] > # 	"containers_oom_count_total",
	I0103 19:31:43.656687   33509 command_runner.go:130] > # ]
	I0103 19:31:43.656700   33509 command_runner.go:130] > # The port on which the metrics server will listen.
	I0103 19:31:43.656711   33509 command_runner.go:130] > # metrics_port = 9090
	I0103 19:31:43.656721   33509 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0103 19:31:43.656732   33509 command_runner.go:130] > # metrics_socket = ""
	I0103 19:31:43.656742   33509 command_runner.go:130] > # The certificate for the secure metrics server.
	I0103 19:31:43.656755   33509 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0103 19:31:43.656769   33509 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0103 19:31:43.656781   33509 command_runner.go:130] > # certificate on any modification event.
	I0103 19:31:43.656789   33509 command_runner.go:130] > # metrics_cert = ""
	I0103 19:31:43.656802   33509 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0103 19:31:43.656818   33509 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0103 19:31:43.656828   33509 command_runner.go:130] > # metrics_key = ""
	I0103 19:31:43.656842   33509 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0103 19:31:43.656853   33509 command_runner.go:130] > [crio.tracing]
	I0103 19:31:43.656869   33509 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0103 19:31:43.656879   33509 command_runner.go:130] > # enable_tracing = false
	I0103 19:31:43.656889   33509 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0103 19:31:43.656901   33509 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0103 19:31:43.656913   33509 command_runner.go:130] > # Number of samples to collect per million spans.
	I0103 19:31:43.656925   33509 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0103 19:31:43.656939   33509 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0103 19:31:43.656949   33509 command_runner.go:130] > [crio.stats]
	I0103 19:31:43.656963   33509 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0103 19:31:43.656976   33509 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0103 19:31:43.656988   33509 command_runner.go:130] > # stats_collection_period = 0
	I0103 19:31:43.657023   33509 command_runner.go:130] ! time="2024-01-03 19:31:43.598322073Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0103 19:31:43.657043   33509 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
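	(Note, not part of the captured log: the "crio config" dump above shows the values minikube overrides from CRI-O's defaults, e.g. storage_driver = "overlay", cgroup_manager = "cgroupfs", conmon = "/usr/libexec/crio/conmon", pids_limit = 1024 and pause_image = "registry.k8s.io/pause:3.9". A minimal sketch for spot-checking those keys on the node, assuming shell access and that crio is on PATH:
	# Grep the effective CRI-O configuration for the keys minikube sets explicitly
	sudo crio config 2>/dev/null | grep -E '^(storage_driver|cgroup_manager|conmon|pids_limit|pause_image)'
	)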
	I0103 19:31:43.657145   33509 cni.go:84] Creating CNI manager for ""
	I0103 19:31:43.657160   33509 cni.go:136] 3 nodes found, recommending kindnet
	I0103 19:31:43.657178   33509 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0103 19:31:43.657203   33509 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.191 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-484895 NodeName:multinode-484895 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.191"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.191 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0103 19:31:43.657337   33509 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.191
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-484895"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.191
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.191"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
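For context, the kubeadm config printed above is generated from the per-profile values in the kubeadm options struct (advertise address, pod subnet, cluster name, Kubernetes version, and so on). A minimal, hypothetical Go sketch of such a render step, using only text/template and invented field names rather than minikube's real types:

package main

import (
	"os"
	"text/template"
)

// Params is a stand-in for the options logged above; field names are illustrative only.
type Params struct {
	ClusterName       string
	KubernetesVersion string
	PodSubnet         string
	ServiceSubnet     string
}

const clusterCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
clusterName: {{.ClusterName}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	p := Params{
		ClusterName:       "mk",
		KubernetesVersion: "v1.28.4",
		PodSubnet:         "10.244.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
	}
	// Render the ClusterConfiguration fragment to stdout.
	template.Must(template.New("kubeadm").Parse(clusterCfg)).Execute(os.Stdout, p)
}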
	
	I0103 19:31:43.657422   33509 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-484895 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.191
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-484895 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0103 19:31:43.657483   33509 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0103 19:31:43.666188   33509 command_runner.go:130] > kubeadm
	I0103 19:31:43.666208   33509 command_runner.go:130] > kubectl
	I0103 19:31:43.666211   33509 command_runner.go:130] > kubelet
	I0103 19:31:43.666230   33509 binaries.go:44] Found k8s binaries, skipping transfer
	I0103 19:31:43.666293   33509 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0103 19:31:43.674756   33509 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (376 bytes)
	I0103 19:31:43.690335   33509 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0103 19:31:43.706616   33509 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I0103 19:31:43.723914   33509 ssh_runner.go:195] Run: grep 192.168.39.191	control-plane.minikube.internal$ /etc/hosts
	I0103 19:31:43.727247   33509 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.191	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
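The hosts-file step first greps for an existing control-plane.minikube.internal entry and only rewrites /etc/hosts when needed: drop any stale line for the name, append the current IP, and copy the result back. A rough stdlib-only sketch of the same idea (paths and names taken from the log; error handling trimmed, and writing /etc/hosts requires root):

package main

import (
	"os"
	"strings"
)

func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		// Drop any existing line that already maps this hostname.
		if strings.HasSuffix(strings.TrimRight(line, " \t"), "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Same entry as the one added in the log.
	_ = ensureHostsEntry("/etc/hosts", "192.168.39.191", "control-plane.minikube.internal")
}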
	I0103 19:31:43.738911   33509 certs.go:56] Setting up /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/multinode-484895 for IP: 192.168.39.191
	I0103 19:31:43.738942   33509 certs.go:190] acquiring lock for shared ca certs: {Name:mkcbd6a6a2f3ee7625ecf4a1f72bb7f9689bd33d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 19:31:43.739071   33509 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17885-9609/.minikube/ca.key
	I0103 19:31:43.739115   33509 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.key
	I0103 19:31:43.739176   33509 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/multinode-484895/client.key
	I0103 19:31:43.739239   33509 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/multinode-484895/apiserver.key.6f081b7d
	I0103 19:31:43.739278   33509 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/multinode-484895/proxy-client.key
	I0103 19:31:43.739288   33509 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/multinode-484895/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0103 19:31:43.739299   33509 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/multinode-484895/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0103 19:31:43.739311   33509 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/multinode-484895/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0103 19:31:43.739323   33509 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/multinode-484895/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0103 19:31:43.739337   33509 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0103 19:31:43.739354   33509 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0103 19:31:43.739370   33509 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0103 19:31:43.739393   33509 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0103 19:31:43.739463   33509 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795.pem (1338 bytes)
	W0103 19:31:43.739504   33509 certs.go:433] ignoring /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795_empty.pem, impossibly tiny 0 bytes
	I0103 19:31:43.739516   33509 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem (1675 bytes)
	I0103 19:31:43.739538   33509 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem (1078 bytes)
	I0103 19:31:43.739561   33509 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem (1123 bytes)
	I0103 19:31:43.739604   33509 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem (1679 bytes)
	I0103 19:31:43.739646   33509 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem (1708 bytes)
	I0103 19:31:43.739674   33509 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem -> /usr/share/ca-certificates/167952.pem
	I0103 19:31:43.739687   33509 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0103 19:31:43.739699   33509 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795.pem -> /usr/share/ca-certificates/16795.pem
	I0103 19:31:43.740236   33509 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/multinode-484895/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0103 19:31:43.762821   33509 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/multinode-484895/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0103 19:31:43.784182   33509 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/multinode-484895/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0103 19:31:43.805370   33509 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/multinode-484895/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0103 19:31:43.829535   33509 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0103 19:31:43.850890   33509 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0103 19:31:43.872181   33509 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0103 19:31:43.893913   33509 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0103 19:31:43.914023   33509 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem --> /usr/share/ca-certificates/167952.pem (1708 bytes)
	I0103 19:31:43.935139   33509 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0103 19:31:43.956308   33509 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795.pem --> /usr/share/ca-certificates/16795.pem (1338 bytes)
	I0103 19:31:43.977330   33509 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0103 19:31:43.991950   33509 ssh_runner.go:195] Run: openssl version
	I0103 19:31:43.996652   33509 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0103 19:31:43.996872   33509 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0103 19:31:44.005785   33509 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0103 19:31:44.010031   33509 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan  3 18:58 /usr/share/ca-certificates/minikubeCA.pem
	I0103 19:31:44.010176   33509 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  3 18:58 /usr/share/ca-certificates/minikubeCA.pem
	I0103 19:31:44.010224   33509 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0103 19:31:44.015183   33509 command_runner.go:130] > b5213941
	I0103 19:31:44.015422   33509 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0103 19:31:44.024305   33509 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16795.pem && ln -fs /usr/share/ca-certificates/16795.pem /etc/ssl/certs/16795.pem"
	I0103 19:31:44.033079   33509 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16795.pem
	I0103 19:31:44.037244   33509 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan  3 19:07 /usr/share/ca-certificates/16795.pem
	I0103 19:31:44.037421   33509 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  3 19:07 /usr/share/ca-certificates/16795.pem
	I0103 19:31:44.037476   33509 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16795.pem
	I0103 19:31:44.042405   33509 command_runner.go:130] > 51391683
	I0103 19:31:44.042830   33509 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16795.pem /etc/ssl/certs/51391683.0"
	I0103 19:31:44.051931   33509 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167952.pem && ln -fs /usr/share/ca-certificates/167952.pem /etc/ssl/certs/167952.pem"
	I0103 19:31:44.060813   33509 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167952.pem
	I0103 19:31:44.064827   33509 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan  3 19:07 /usr/share/ca-certificates/167952.pem
	I0103 19:31:44.064936   33509 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  3 19:07 /usr/share/ca-certificates/167952.pem
	I0103 19:31:44.064989   33509 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167952.pem
	I0103 19:31:44.069863   33509 command_runner.go:130] > 3ec20f2e
	I0103 19:31:44.070042   33509 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167952.pem /etc/ssl/certs/3ec20f2e.0"
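Installing a CA into the system trust store follows the OpenSSL c_rehash convention visible above: compute the certificate's subject hash and symlink it as /etc/ssl/certs/<hash>.0. A small sketch that shells out to the same openssl invocation the log uses (a simplified illustration, run as root with openssl on PATH; it links the original PEM rather than a copy under /etc/ssl/certs):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkCACert mirrors the log's "openssl x509 -hash -noout" plus "ln -fs" sequence.
func linkCACert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	os.Remove(link) // ignore error: ln -fs semantics
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}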
	I0103 19:31:44.078773   33509 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0103 19:31:44.082820   33509 command_runner.go:130] > ca.crt
	I0103 19:31:44.082841   33509 command_runner.go:130] > ca.key
	I0103 19:31:44.082850   33509 command_runner.go:130] > healthcheck-client.crt
	I0103 19:31:44.082858   33509 command_runner.go:130] > healthcheck-client.key
	I0103 19:31:44.082866   33509 command_runner.go:130] > peer.crt
	I0103 19:31:44.082875   33509 command_runner.go:130] > peer.key
	I0103 19:31:44.082883   33509 command_runner.go:130] > server.crt
	I0103 19:31:44.082892   33509 command_runner.go:130] > server.key
	I0103 19:31:44.082951   33509 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0103 19:31:44.088167   33509 command_runner.go:130] > Certificate will not expire
	I0103 19:31:44.088310   33509 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0103 19:31:44.093816   33509 command_runner.go:130] > Certificate will not expire
	I0103 19:31:44.093899   33509 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0103 19:31:44.099153   33509 command_runner.go:130] > Certificate will not expire
	I0103 19:31:44.099453   33509 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0103 19:31:44.104610   33509 command_runner.go:130] > Certificate will not expire
	I0103 19:31:44.104782   33509 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0103 19:31:44.109925   33509 command_runner.go:130] > Certificate will not expire
	I0103 19:31:44.110192   33509 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0103 19:31:44.115260   33509 command_runner.go:130] > Certificate will not expire
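The "-checkend 86400" probes above ask whether each certificate will still be valid 24 hours from now. The equivalent check in Go with crypto/x509 looks roughly like this (a sketch; the path is one of the files tested in the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file
// will have expired by now+d, i.e. the condition "openssl -checkend" flags.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}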
	I0103 19:31:44.115556   33509 kubeadm.go:404] StartCluster: {Name:multinode-484895 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.28.4 ClusterName:multinode-484895 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.191 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.86 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.156 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:fals
e ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 19:31:44.115715   33509 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0103 19:31:44.115756   33509 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0103 19:31:44.152335   33509 cri.go:89] found id: ""
	I0103 19:31:44.152420   33509 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0103 19:31:44.161348   33509 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0103 19:31:44.161369   33509 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0103 19:31:44.161388   33509 command_runner.go:130] > /var/lib/minikube/etcd:
	I0103 19:31:44.161392   33509 command_runner.go:130] > member
	I0103 19:31:44.161515   33509 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0103 19:31:44.161536   33509 kubeadm.go:636] restartCluster start
	I0103 19:31:44.161599   33509 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0103 19:31:44.170384   33509 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0103 19:31:44.170927   33509 kubeconfig.go:92] found "multinode-484895" server: "https://192.168.39.191:8443"
	I0103 19:31:44.171374   33509 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17885-9609/kubeconfig
	I0103 19:31:44.171625   33509 kapi.go:59] client config for multinode-484895: &rest.Config{Host:"https://192.168.39.191:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17885-9609/.minikube/profiles/multinode-484895/client.crt", KeyFile:"/home/jenkins/minikube-integration/17885-9609/.minikube/profiles/multinode-484895/client.key", CAFile:"/home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c20060), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0103 19:31:44.172191   33509 cert_rotation.go:137] Starting client certificate rotation controller
	I0103 19:31:44.172331   33509 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0103 19:31:44.180249   33509 api_server.go:166] Checking apiserver status ...
	I0103 19:31:44.180315   33509 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 19:31:44.190463   33509 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 19:31:44.681229   33509 api_server.go:166] Checking apiserver status ...
	I0103 19:31:44.681346   33509 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 19:31:44.692102   33509 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 19:31:45.180670   33509 api_server.go:166] Checking apiserver status ...
	I0103 19:31:45.180792   33509 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 19:31:45.191906   33509 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 19:31:45.680426   33509 api_server.go:166] Checking apiserver status ...
	I0103 19:31:45.680513   33509 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 19:31:45.691261   33509 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 19:31:46.180899   33509 api_server.go:166] Checking apiserver status ...
	I0103 19:31:46.181013   33509 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 19:31:46.191582   33509 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 19:31:46.681224   33509 api_server.go:166] Checking apiserver status ...
	I0103 19:31:46.681322   33509 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 19:31:46.692337   33509 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 19:31:47.180947   33509 api_server.go:166] Checking apiserver status ...
	I0103 19:31:47.181038   33509 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 19:31:47.192305   33509 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 19:31:47.681093   33509 api_server.go:166] Checking apiserver status ...
	I0103 19:31:47.681161   33509 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 19:31:47.692431   33509 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 19:31:48.181104   33509 api_server.go:166] Checking apiserver status ...
	I0103 19:31:48.181188   33509 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 19:31:48.191730   33509 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 19:31:48.680277   33509 api_server.go:166] Checking apiserver status ...
	I0103 19:31:48.680370   33509 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 19:31:48.691225   33509 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 19:31:49.180776   33509 api_server.go:166] Checking apiserver status ...
	I0103 19:31:49.180877   33509 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 19:31:49.191631   33509 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 19:31:49.680617   33509 api_server.go:166] Checking apiserver status ...
	I0103 19:31:49.680706   33509 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 19:31:49.692041   33509 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 19:31:50.180289   33509 api_server.go:166] Checking apiserver status ...
	I0103 19:31:50.180377   33509 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 19:31:50.191068   33509 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 19:31:50.680623   33509 api_server.go:166] Checking apiserver status ...
	I0103 19:31:50.680727   33509 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 19:31:50.691739   33509 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 19:31:51.181306   33509 api_server.go:166] Checking apiserver status ...
	I0103 19:31:51.181423   33509 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 19:31:51.192484   33509 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 19:31:51.680796   33509 api_server.go:166] Checking apiserver status ...
	I0103 19:31:51.680891   33509 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 19:31:51.692056   33509 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 19:31:52.180610   33509 api_server.go:166] Checking apiserver status ...
	I0103 19:31:52.180704   33509 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 19:31:52.192255   33509 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 19:31:52.681255   33509 api_server.go:166] Checking apiserver status ...
	I0103 19:31:52.681331   33509 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 19:31:52.692627   33509 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 19:31:53.181260   33509 api_server.go:166] Checking apiserver status ...
	I0103 19:31:53.181354   33509 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 19:31:53.192478   33509 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 19:31:53.681065   33509 api_server.go:166] Checking apiserver status ...
	I0103 19:31:53.681147   33509 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 19:31:53.692432   33509 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 19:31:54.181309   33509 api_server.go:166] Checking apiserver status ...
	I0103 19:31:54.181381   33509 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 19:31:54.192349   33509 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 19:31:54.192377   33509 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0103 19:31:54.192387   33509 kubeadm.go:1135] stopping kube-system containers ...
	I0103 19:31:54.192398   33509 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0103 19:31:54.192467   33509 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0103 19:31:54.230387   33509 cri.go:89] found id: ""
	I0103 19:31:54.230479   33509 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0103 19:31:54.245470   33509 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0103 19:31:54.254144   33509 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0103 19:31:54.254184   33509 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0103 19:31:54.254196   33509 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0103 19:31:54.254444   33509 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0103 19:31:54.254940   33509 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0103 19:31:54.255006   33509 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0103 19:31:54.263194   33509 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0103 19:31:54.263220   33509 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 19:31:54.369465   33509 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0103 19:31:54.369905   33509 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0103 19:31:54.370294   33509 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0103 19:31:54.370755   33509 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0103 19:31:54.371315   33509 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0103 19:31:54.371845   33509 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0103 19:31:54.372653   33509 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0103 19:31:54.373092   33509 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0103 19:31:54.373575   33509 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0103 19:31:54.373995   33509 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0103 19:31:54.374466   33509 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0103 19:31:54.375053   33509 command_runner.go:130] > [certs] Using the existing "sa" key
	I0103 19:31:54.376411   33509 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 19:31:54.424060   33509 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0103 19:31:54.673127   33509 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0103 19:31:55.064731   33509 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0103 19:31:55.152537   33509 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0103 19:31:55.376767   33509 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0103 19:31:55.380176   33509 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.003739022s)
	I0103 19:31:55.380201   33509 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0103 19:31:55.445371   33509 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0103 19:31:55.451359   33509 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0103 19:31:55.451388   33509 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0103 19:31:55.576945   33509 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 19:31:55.661077   33509 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0103 19:31:55.661103   33509 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0103 19:31:55.661113   33509 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0103 19:31:55.661124   33509 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0103 19:31:55.661150   33509 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0103 19:31:55.724775   33509 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
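The restart path re-runs individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the staged /var/tmp/minikube/kubeadm.yaml instead of doing a full kubeadm init. A bare-bones sketch of driving those phases from Go with os/exec, using the binary and config paths shown in the log (not minikube's actual runner):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func runPhase(args ...string) error {
	cmd := exec.Command("/var/lib/minikube/binaries/v1.28.4/kubeadm",
		append([]string{"init", "phase"}, args...)...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	cfg := "--config=/var/tmp/minikube/kubeadm.yaml"
	// Same phase order as the log: certs, kubeconfig, kubelet-start, control-plane, etcd.
	phases := [][]string{
		{"certs", "all", cfg},
		{"kubeconfig", "all", cfg},
		{"kubelet-start", cfg},
		{"control-plane", "all", cfg},
		{"etcd", "local", cfg},
	}
	for _, p := range phases {
		if err := runPhase(p...); err != nil {
			fmt.Fprintln(os.Stderr, "phase failed:", p[0], err)
			return
		}
	}
}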
	I0103 19:31:55.728044   33509 api_server.go:52] waiting for apiserver process to appear ...
	I0103 19:31:55.728141   33509 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 19:31:56.228671   33509 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 19:31:56.728471   33509 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 19:31:57.228838   33509 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 19:31:57.728868   33509 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 19:31:58.229145   33509 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 19:31:58.253441   33509 command_runner.go:130] > 1089
	I0103 19:31:58.253532   33509 api_server.go:72] duration metric: took 2.52549262s to wait for apiserver process to appear ...
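Waiting for the apiserver process is a bounded poll of the same pgrep command shown above, repeated roughly every half second until it prints a PID or the deadline passes. A compact sketch of that loop (run locally rather than over SSH; the timeout value is an assumption):

package main

import (
	"context"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForAPIServerPID polls pgrep until the apiserver appears or ctx expires.
func waitForAPIServerPID(ctx context.Context) (string, error) {
	tick := time.NewTicker(500 * time.Millisecond)
	defer tick.Stop()
	for {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			return strings.TrimSpace(string(out)), nil
		}
		select {
		case <-ctx.Done():
			return "", ctx.Err()
		case <-tick.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	pid, err := waitForAPIServerPID(ctx)
	fmt.Println(pid, err)
}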
	I0103 19:31:58.253548   33509 api_server.go:88] waiting for apiserver healthz status ...
	I0103 19:31:58.253569   33509 api_server.go:253] Checking apiserver healthz at https://192.168.39.191:8443/healthz ...
	I0103 19:32:02.036647   33509 api_server.go:279] https://192.168.39.191:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0103 19:32:02.036680   33509 api_server.go:103] status: https://192.168.39.191:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0103 19:32:02.036694   33509 api_server.go:253] Checking apiserver healthz at https://192.168.39.191:8443/healthz ...
	I0103 19:32:02.070791   33509 api_server.go:279] https://192.168.39.191:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0103 19:32:02.070826   33509 api_server.go:103] status: https://192.168.39.191:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0103 19:32:02.254160   33509 api_server.go:253] Checking apiserver healthz at https://192.168.39.191:8443/healthz ...
	I0103 19:32:02.263075   33509 api_server.go:279] https://192.168.39.191:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0103 19:32:02.263110   33509 api_server.go:103] status: https://192.168.39.191:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0103 19:32:02.754231   33509 api_server.go:253] Checking apiserver healthz at https://192.168.39.191:8443/healthz ...
	I0103 19:32:02.759905   33509 api_server.go:279] https://192.168.39.191:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0103 19:32:02.759934   33509 api_server.go:103] status: https://192.168.39.191:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0103 19:32:03.254570   33509 api_server.go:253] Checking apiserver healthz at https://192.168.39.191:8443/healthz ...
	I0103 19:32:03.261189   33509 api_server.go:279] https://192.168.39.191:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0103 19:32:03.261215   33509 api_server.go:103] status: https://192.168.39.191:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0103 19:32:03.753772   33509 api_server.go:253] Checking apiserver healthz at https://192.168.39.191:8443/healthz ...
	I0103 19:32:03.761086   33509 api_server.go:279] https://192.168.39.191:8443/healthz returned 200:
	ok
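The healthz wait is an HTTPS GET of /healthz that tolerates the early 403 (anonymous user) and 500 (poststart hooks still settling) responses and keeps retrying until the body is literally "ok". A sketch of that probe; TLS verification is skipped here purely for brevity, which is an assumption for the example, not what minikube does with the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			// 403 and 500 mean "not ready yet"; only a 200 "ok" ends the wait.
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	fmt.Println(waitHealthz("https://192.168.39.191:8443/healthz", 2*time.Minute))
}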
	I0103 19:32:03.761180   33509 round_trippers.go:463] GET https://192.168.39.191:8443/version
	I0103 19:32:03.761191   33509 round_trippers.go:469] Request Headers:
	I0103 19:32:03.761203   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:32:03.761218   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:32:03.771114   33509 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0103 19:32:03.771145   33509 round_trippers.go:577] Response Headers:
	I0103 19:32:03.771155   33509 round_trippers.go:580]     Audit-Id: afa5d547-cf79-40a1-85f5-b02b55e6c4eb
	I0103 19:32:03.771164   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:32:03.771173   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:32:03.771188   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:32:03.771195   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:32:03.771203   33509 round_trippers.go:580]     Content-Length: 264
	I0103 19:32:03.771210   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:32:03 GMT
	I0103 19:32:03.771238   33509 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0103 19:32:03.771341   33509 api_server.go:141] control plane version: v1.28.4
	I0103 19:32:03.771367   33509 api_server.go:131] duration metric: took 5.517812272s to wait for apiserver health ...
	I0103 19:32:03.771379   33509 cni.go:84] Creating CNI manager for ""
	I0103 19:32:03.771386   33509 cni.go:136] 3 nodes found, recommending kindnet
	I0103 19:32:03.773675   33509 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0103 19:32:03.775126   33509 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0103 19:32:03.783438   33509 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0103 19:32:03.783468   33509 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0103 19:32:03.783478   33509 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0103 19:32:03.783492   33509 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0103 19:32:03.783507   33509 command_runner.go:130] > Access: 2024-01-03 19:31:29.762982388 +0000
	I0103 19:32:03.783519   33509 command_runner.go:130] > Modify: 2023-12-16 11:53:47.000000000 +0000
	I0103 19:32:03.783531   33509 command_runner.go:130] > Change: 2024-01-03 19:31:27.994982388 +0000
	I0103 19:32:03.783541   33509 command_runner.go:130] >  Birth: -
	I0103 19:32:03.783680   33509 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0103 19:32:03.783697   33509 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0103 19:32:03.838971   33509 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0103 19:32:05.018583   33509 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0103 19:32:05.018611   33509 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0103 19:32:05.018619   33509 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0103 19:32:05.018627   33509 command_runner.go:130] > daemonset.apps/kindnet configured
	I0103 19:32:05.018652   33509 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.179651398s)
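Applying the kindnet manifest is a plain kubectl apply with the in-VM kubeconfig; because the objects already exist, the output is a series of "unchanged"/"configured" lines. A sketch of the same call from Go, with the binary and file paths copied from the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.28.4/kubectl",
		"apply",
		"--kubeconfig=/var/lib/minikube/kubeconfig",
		"-f", "/var/tmp/minikube/cni.yaml")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "apply failed:", err)
	}
}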
	I0103 19:32:05.018676   33509 system_pods.go:43] waiting for kube-system pods to appear ...
	I0103 19:32:05.018780   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods
	I0103 19:32:05.018790   33509 round_trippers.go:469] Request Headers:
	I0103 19:32:05.018800   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:32:05.018810   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:32:05.023173   33509 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0103 19:32:05.023204   33509 round_trippers.go:577] Response Headers:
	I0103 19:32:05.023214   33509 round_trippers.go:580]     Audit-Id: c7cc0cb4-10ea-4ab1-b4dd-6615ca1fd052
	I0103 19:32:05.023222   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:32:05.023234   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:32:05.023244   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:32:05.023257   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:32:05.023266   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:32:04 GMT
	I0103 19:32:05.024595   33509 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"757"},"items":[{"metadata":{"name":"coredns-5dd5756b68-wzsqb","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9e8dd1fe-7476-40d5-8cb7-562fcdc5deaa","resourceVersion":"744","creationTimestamp":"2024-01-03T19:21:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e9219a81-ca58-4a90-b963-60ed0c2d0b1c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:21:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e9219a81-ca58-4a90-b963-60ed0c2d0b1c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82638 chars]
	I0103 19:32:05.028597   33509 system_pods.go:59] 12 kube-system pods found
	I0103 19:32:05.028628   33509 system_pods.go:61] "coredns-5dd5756b68-wzsqb" [9e8dd1fe-7476-40d5-8cb7-562fcdc5deaa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0103 19:32:05.028643   33509 system_pods.go:61] "etcd-multinode-484895" [2b5f9dc7-2d61-4968-9b9a-cfc029c9522b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0103 19:32:05.028653   33509 system_pods.go:61] "kindnet-gqgk2" [8d4f9028-52ad-44dd-83be-0bb7cc590b7f] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0103 19:32:05.028666   33509 system_pods.go:61] "kindnet-lfkpk" [69692e6a-42a1-48d7-aec1-d192a3e793ec] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0103 19:32:05.028677   33509 system_pods.go:61] "kindnet-zt7zf" [410b1bf2-5e4a-4c3d-8cbb-4145b96b8e3e] Running
	I0103 19:32:05.028692   33509 system_pods.go:61] "kube-apiserver-multinode-484895" [f9f36416-b761-4534-8e09-bc3c94813149] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0103 19:32:05.028706   33509 system_pods.go:61] "kube-controller-manager-multinode-484895" [a04de258-1f92-4ac7-8f30-18ad9ebb6d40] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0103 19:32:05.028718   33509 system_pods.go:61] "kube-proxy-k7jnm" [4b0bd9f4-9da5-42c6-83a4-0a3f05f640b3] Running
	I0103 19:32:05.028728   33509 system_pods.go:61] "kube-proxy-strp6" [f16942b4-2697-4fd7-88f7-3699e16bff79] Running
	I0103 19:32:05.028735   33509 system_pods.go:61] "kube-proxy-tp9s2" [728b1db9-b145-4ad3-b366-7fd8306d7a2a] Running
	I0103 19:32:05.028747   33509 system_pods.go:61] "kube-scheduler-multinode-484895" [f981e6c0-1f4a-44ed-b043-c69ef28b4fa5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0103 19:32:05.028762   33509 system_pods.go:61] "storage-provisioner" [82edd1c3-f361-4f86-8d59-8b89193d7a31] Running
	I0103 19:32:05.028775   33509 system_pods.go:74] duration metric: took 10.087792ms to wait for pod list to return data ...
	I0103 19:32:05.028787   33509 node_conditions.go:102] verifying NodePressure condition ...
	I0103 19:32:05.028856   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes
	I0103 19:32:05.028867   33509 round_trippers.go:469] Request Headers:
	I0103 19:32:05.028877   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:32:05.028887   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:32:05.031854   33509 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:32:05.031874   33509 round_trippers.go:577] Response Headers:
	I0103 19:32:05.031884   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:32:04 GMT
	I0103 19:32:05.031893   33509 round_trippers.go:580]     Audit-Id: 5adbe4e0-0524-4e3d-95ea-d1df4d417b3c
	I0103 19:32:05.031900   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:32:05.031921   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:32:05.031929   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:32:05.031937   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:32:05.032370   33509 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"757"},"items":[{"metadata":{"name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","resourceVersion":"707","creationTimestamp":"2024-01-03T19:21:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_21_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 15786 chars]
	I0103 19:32:05.033152   33509 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0103 19:32:05.033178   33509 node_conditions.go:123] node cpu capacity is 2
	I0103 19:32:05.033191   33509 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0103 19:32:05.033198   33509 node_conditions.go:123] node cpu capacity is 2
	I0103 19:32:05.033204   33509 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0103 19:32:05.033215   33509 node_conditions.go:123] node cpu capacity is 2
	I0103 19:32:05.033229   33509 node_conditions.go:105] duration metric: took 4.433792ms to run NodePressure ...
	I0103 19:32:05.033250   33509 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 19:32:05.199195   33509 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0103 19:32:05.259946   33509 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0103 19:32:05.261449   33509 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0103 19:32:05.261576   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I0103 19:32:05.261589   33509 round_trippers.go:469] Request Headers:
	I0103 19:32:05.261600   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:32:05.261609   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:32:05.265566   33509 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0103 19:32:05.265587   33509 round_trippers.go:577] Response Headers:
	I0103 19:32:05.265597   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:32:05.265605   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:32:05.265613   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:32:05 GMT
	I0103 19:32:05.265633   33509 round_trippers.go:580]     Audit-Id: cd1d2163-a813-4782-8e57-929d541b955d
	I0103 19:32:05.265645   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:32:05.265654   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:32:05.267368   33509 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"759"},"items":[{"metadata":{"name":"etcd-multinode-484895","namespace":"kube-system","uid":"2b5f9dc7-2d61-4968-9b9a-cfc029c9522b","resourceVersion":"750","creationTimestamp":"2024-01-03T19:21:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.191:2379","kubernetes.io/config.hash":"9bc39430cce393fdab624e5093adf15c","kubernetes.io/config.mirror":"9bc39430cce393fdab624e5093adf15c","kubernetes.io/config.seen":"2024-01-03T19:21:43.948366778Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:21:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotation
s":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:k [truncated 28886 chars]
	I0103 19:32:05.268723   33509 kubeadm.go:787] kubelet initialised
	I0103 19:32:05.268749   33509 kubeadm.go:788] duration metric: took 7.273429ms waiting for restarted kubelet to initialise ...
	I0103 19:32:05.268758   33509 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 19:32:05.268839   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods
	I0103 19:32:05.268851   33509 round_trippers.go:469] Request Headers:
	I0103 19:32:05.268860   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:32:05.268871   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:32:05.273764   33509 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0103 19:32:05.273783   33509 round_trippers.go:577] Response Headers:
	I0103 19:32:05.273792   33509 round_trippers.go:580]     Audit-Id: 8af1dfd8-60ca-452c-a29c-1a7455b0f8d8
	I0103 19:32:05.273800   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:32:05.273807   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:32:05.273814   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:32:05.273823   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:32:05.273832   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:32:05 GMT
	I0103 19:32:05.276340   33509 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"759"},"items":[{"metadata":{"name":"coredns-5dd5756b68-wzsqb","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9e8dd1fe-7476-40d5-8cb7-562fcdc5deaa","resourceVersion":"744","creationTimestamp":"2024-01-03T19:21:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e9219a81-ca58-4a90-b963-60ed0c2d0b1c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:21:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e9219a81-ca58-4a90-b963-60ed0c2d0b1c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82638 chars]
	I0103 19:32:05.278882   33509 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-wzsqb" in "kube-system" namespace to be "Ready" ...
	I0103 19:32:05.278973   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-wzsqb
	I0103 19:32:05.278982   33509 round_trippers.go:469] Request Headers:
	I0103 19:32:05.278990   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:32:05.278996   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:32:05.281926   33509 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:32:05.281944   33509 round_trippers.go:577] Response Headers:
	I0103 19:32:05.281955   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:32:05.281964   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:32:05 GMT
	I0103 19:32:05.281973   33509 round_trippers.go:580]     Audit-Id: 32ed5e7c-3a05-4a36-96e6-d6a4f4dd21aa
	I0103 19:32:05.281985   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:32:05.281991   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:32:05.281996   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:32:05.282602   33509 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-wzsqb","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9e8dd1fe-7476-40d5-8cb7-562fcdc5deaa","resourceVersion":"744","creationTimestamp":"2024-01-03T19:21:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e9219a81-ca58-4a90-b963-60ed0c2d0b1c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:21:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e9219a81-ca58-4a90-b963-60ed0c2d0b1c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0103 19:32:05.283140   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895
	I0103 19:32:05.283157   33509 round_trippers.go:469] Request Headers:
	I0103 19:32:05.283164   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:32:05.283171   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:32:05.288071   33509 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0103 19:32:05.288089   33509 round_trippers.go:577] Response Headers:
	I0103 19:32:05.288096   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:32:05.288102   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:32:05 GMT
	I0103 19:32:05.288107   33509 round_trippers.go:580]     Audit-Id: de34ce6d-6e64-45e5-97ad-1a16cfa8aceb
	I0103 19:32:05.288118   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:32:05.288125   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:32:05.288133   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:32:05.288811   33509 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","resourceVersion":"707","creationTimestamp":"2024-01-03T19:21:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_21_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:21:40Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0103 19:32:05.289195   33509 pod_ready.go:97] node "multinode-484895" hosting pod "coredns-5dd5756b68-wzsqb" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-484895" has status "Ready":"False"
	I0103 19:32:05.289226   33509 pod_ready.go:81] duration metric: took 10.320469ms waiting for pod "coredns-5dd5756b68-wzsqb" in "kube-system" namespace to be "Ready" ...
	E0103 19:32:05.289241   33509 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-484895" hosting pod "coredns-5dd5756b68-wzsqb" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-484895" has status "Ready":"False"
	I0103 19:32:05.289251   33509 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-484895" in "kube-system" namespace to be "Ready" ...
	I0103 19:32:05.289363   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-484895
	I0103 19:32:05.289376   33509 round_trippers.go:469] Request Headers:
	I0103 19:32:05.289387   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:32:05.289400   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:32:05.292073   33509 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:32:05.292089   33509 round_trippers.go:577] Response Headers:
	I0103 19:32:05.292095   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:32:05.292101   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:32:05.292106   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:32:05 GMT
	I0103 19:32:05.292111   33509 round_trippers.go:580]     Audit-Id: acb36a16-f9bd-4be8-b891-1624f2e255a9
	I0103 19:32:05.292118   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:32:05.292125   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:32:05.292439   33509 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-484895","namespace":"kube-system","uid":"2b5f9dc7-2d61-4968-9b9a-cfc029c9522b","resourceVersion":"750","creationTimestamp":"2024-01-03T19:21:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.191:2379","kubernetes.io/config.hash":"9bc39430cce393fdab624e5093adf15c","kubernetes.io/config.mirror":"9bc39430cce393fdab624e5093adf15c","kubernetes.io/config.seen":"2024-01-03T19:21:43.948366778Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:21:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6077 chars]
	I0103 19:32:05.292798   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895
	I0103 19:32:05.292810   33509 round_trippers.go:469] Request Headers:
	I0103 19:32:05.292817   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:32:05.292827   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:32:05.296832   33509 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0103 19:32:05.296846   33509 round_trippers.go:577] Response Headers:
	I0103 19:32:05.296853   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:32:05.296859   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:32:05.296864   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:32:05 GMT
	I0103 19:32:05.296870   33509 round_trippers.go:580]     Audit-Id: f0d2d785-74b9-4610-9095-8e35d2d3a3a1
	I0103 19:32:05.296879   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:32:05.296891   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:32:05.297245   33509 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","resourceVersion":"707","creationTimestamp":"2024-01-03T19:21:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_21_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:21:40Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0103 19:32:05.297627   33509 pod_ready.go:97] node "multinode-484895" hosting pod "etcd-multinode-484895" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-484895" has status "Ready":"False"
	I0103 19:32:05.297669   33509 pod_ready.go:81] duration metric: took 8.404304ms waiting for pod "etcd-multinode-484895" in "kube-system" namespace to be "Ready" ...
	E0103 19:32:05.297684   33509 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-484895" hosting pod "etcd-multinode-484895" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-484895" has status "Ready":"False"
	I0103 19:32:05.297702   33509 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-484895" in "kube-system" namespace to be "Ready" ...
	I0103 19:32:05.297773   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-484895
	I0103 19:32:05.297783   33509 round_trippers.go:469] Request Headers:
	I0103 19:32:05.297791   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:32:05.297799   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:32:05.301138   33509 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0103 19:32:05.301156   33509 round_trippers.go:577] Response Headers:
	I0103 19:32:05.301162   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:32:05.301167   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:32:05 GMT
	I0103 19:32:05.301172   33509 round_trippers.go:580]     Audit-Id: cc85f580-da8d-4b33-ab76-3b5a6cfcbe18
	I0103 19:32:05.301177   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:32:05.301183   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:32:05.301191   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:32:05.301405   33509 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-484895","namespace":"kube-system","uid":"f9f36416-b761-4534-8e09-bc3c94813149","resourceVersion":"747","creationTimestamp":"2024-01-03T19:21:44Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.191:8443","kubernetes.io/config.hash":"2adb5a2561f637a585e38e2b73f2b809","kubernetes.io/config.mirror":"2adb5a2561f637a585e38e2b73f2b809","kubernetes.io/config.seen":"2024-01-03T19:21:43.948370781Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:21:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7633 chars]
	I0103 19:32:05.301782   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895
	I0103 19:32:05.301794   33509 round_trippers.go:469] Request Headers:
	I0103 19:32:05.301800   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:32:05.301813   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:32:05.305475   33509 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0103 19:32:05.305490   33509 round_trippers.go:577] Response Headers:
	I0103 19:32:05.305497   33509 round_trippers.go:580]     Audit-Id: f7602135-c2f5-4c8f-be5f-951603464d4e
	I0103 19:32:05.305502   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:32:05.305510   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:32:05.305525   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:32:05.305538   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:32:05.305547   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:32:05 GMT
	I0103 19:32:05.305955   33509 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","resourceVersion":"707","creationTimestamp":"2024-01-03T19:21:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_21_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:21:40Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0103 19:32:05.306309   33509 pod_ready.go:97] node "multinode-484895" hosting pod "kube-apiserver-multinode-484895" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-484895" has status "Ready":"False"
	I0103 19:32:05.306328   33509 pod_ready.go:81] duration metric: took 8.616655ms waiting for pod "kube-apiserver-multinode-484895" in "kube-system" namespace to be "Ready" ...
	E0103 19:32:05.306339   33509 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-484895" hosting pod "kube-apiserver-multinode-484895" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-484895" has status "Ready":"False"
	I0103 19:32:05.306353   33509 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-484895" in "kube-system" namespace to be "Ready" ...
	I0103 19:32:05.306425   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-484895
	I0103 19:32:05.306436   33509 round_trippers.go:469] Request Headers:
	I0103 19:32:05.306447   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:32:05.306459   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:32:05.309937   33509 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0103 19:32:05.309954   33509 round_trippers.go:577] Response Headers:
	I0103 19:32:05.309961   33509 round_trippers.go:580]     Audit-Id: 0fb4e57f-d29f-4766-ba69-540b71c0dafc
	I0103 19:32:05.309967   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:32:05.309979   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:32:05.309992   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:32:05.310004   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:32:05.310015   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:32:05 GMT
	I0103 19:32:05.310652   33509 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-484895","namespace":"kube-system","uid":"a04de258-1f92-4ac7-8f30-18ad9ebb6d40","resourceVersion":"751","creationTimestamp":"2024-01-03T19:21:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"091c426717be69d480bcc59d28e953ce","kubernetes.io/config.mirror":"091c426717be69d480bcc59d28e953ce","kubernetes.io/config.seen":"2024-01-03T19:21:43.948371847Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:21:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7216 chars]
	I0103 19:32:05.419358   33509 request.go:629] Waited for 108.290573ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.191:8443/api/v1/nodes/multinode-484895
	I0103 19:32:05.419410   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895
	I0103 19:32:05.419415   33509 round_trippers.go:469] Request Headers:
	I0103 19:32:05.419435   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:32:05.419441   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:32:05.422396   33509 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:32:05.422416   33509 round_trippers.go:577] Response Headers:
	I0103 19:32:05.422423   33509 round_trippers.go:580]     Audit-Id: 02b33c9e-08d1-4785-87ee-eaa238edd19a
	I0103 19:32:05.422429   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:32:05.422434   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:32:05.422439   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:32:05.422445   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:32:05.422452   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:32:05 GMT
	I0103 19:32:05.422780   33509 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","resourceVersion":"707","creationTimestamp":"2024-01-03T19:21:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_21_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:21:40Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0103 19:32:05.423086   33509 pod_ready.go:97] node "multinode-484895" hosting pod "kube-controller-manager-multinode-484895" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-484895" has status "Ready":"False"
	I0103 19:32:05.423103   33509 pod_ready.go:81] duration metric: took 116.735282ms waiting for pod "kube-controller-manager-multinode-484895" in "kube-system" namespace to be "Ready" ...
	E0103 19:32:05.423112   33509 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-484895" hosting pod "kube-controller-manager-multinode-484895" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-484895" has status "Ready":"False"
	I0103 19:32:05.423118   33509 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-k7jnm" in "kube-system" namespace to be "Ready" ...
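The "Waited for ... due to client-side throttling, not priority and fairness" messages that follow come from client-go's default client-side rate limiter (QPS 5, burst 10), not from the API server. Purely as an illustration, and not minikube's own code, a client built from a kubeconfig could relax that limiter by setting the QPS and Burst fields on the rest.Config before creating the clientset; the kubeconfig path below is an assumption.

```go
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative sketch (assumed kubeconfig path): load client config and
	// raise client-go's default client-side rate limit, which is what emits
	// the "Waited for ... due to client-side throttling" lines in this log.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cfg.QPS = 50    // sustained requests per second allowed by the client
	cfg.Burst = 100 // short-term burst above QPS

	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Printf("client ready: %T\n", clientset)
}
```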
	I0103 19:32:05.619579   33509 request.go:629] Waited for 196.397332ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/kube-proxy-k7jnm
	I0103 19:32:05.619663   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/kube-proxy-k7jnm
	I0103 19:32:05.619671   33509 round_trippers.go:469] Request Headers:
	I0103 19:32:05.619682   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:32:05.619691   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:32:05.622882   33509 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0103 19:32:05.622909   33509 round_trippers.go:577] Response Headers:
	I0103 19:32:05.622920   33509 round_trippers.go:580]     Audit-Id: 0f262be5-6598-457d-9a19-d45bdb4d2167
	I0103 19:32:05.622930   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:32:05.622938   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:32:05.622944   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:32:05.622952   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:32:05.622960   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:32:05 GMT
	I0103 19:32:05.623613   33509 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-k7jnm","generateName":"kube-proxy-","namespace":"kube-system","uid":"4b0bd9f4-9da5-42c6-83a4-0a3f05f640b3","resourceVersion":"470","creationTimestamp":"2024-01-03T19:22:34Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"93e45959-afd7-4869-a648-321076d75f45","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:22:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"93e45959-afd7-4869-a648-321076d75f45\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5522 chars]
	I0103 19:32:05.819444   33509 request.go:629] Waited for 195.38944ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.191:8443/api/v1/nodes/multinode-484895-m02
	I0103 19:32:05.819497   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895-m02
	I0103 19:32:05.819502   33509 round_trippers.go:469] Request Headers:
	I0103 19:32:05.819509   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:32:05.819515   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:32:05.823390   33509 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0103 19:32:05.823431   33509 round_trippers.go:577] Response Headers:
	I0103 19:32:05.823439   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:32:05 GMT
	I0103 19:32:05.823445   33509 round_trippers.go:580]     Audit-Id: f1b7428e-31cc-42fb-84db-f40a521d0e22
	I0103 19:32:05.823451   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:32:05.823458   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:32:05.823466   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:32:05.823474   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:32:05.823632   33509 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895-m02","uid":"7da57402-60a6-432d-91c4-768d87ae2e5f","resourceVersion":"670","creationTimestamp":"2024-01-03T19:22:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_24_07_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:22:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 4235 chars]
	I0103 19:32:05.824000   33509 pod_ready.go:92] pod "kube-proxy-k7jnm" in "kube-system" namespace has status "Ready":"True"
	I0103 19:32:05.824020   33509 pod_ready.go:81] duration metric: took 400.895444ms waiting for pod "kube-proxy-k7jnm" in "kube-system" namespace to be "Ready" ...
	I0103 19:32:05.824030   33509 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-strp6" in "kube-system" namespace to be "Ready" ...
	I0103 19:32:06.018897   33509 request.go:629] Waited for 194.791738ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/kube-proxy-strp6
	I0103 19:32:06.018964   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/kube-proxy-strp6
	I0103 19:32:06.018969   33509 round_trippers.go:469] Request Headers:
	I0103 19:32:06.018976   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:32:06.018985   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:32:06.021516   33509 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:32:06.021541   33509 round_trippers.go:577] Response Headers:
	I0103 19:32:06.021551   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:32:05 GMT
	I0103 19:32:06.021564   33509 round_trippers.go:580]     Audit-Id: 4f6bac42-0c02-4610-8c13-600a6f4e0d7b
	I0103 19:32:06.021573   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:32:06.021581   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:32:06.021594   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:32:06.021601   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:32:06.021772   33509 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-strp6","generateName":"kube-proxy-","namespace":"kube-system","uid":"f16942b4-2697-4fd7-88f7-3699e16bff79","resourceVersion":"677","creationTimestamp":"2024-01-03T19:23:25Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"93e45959-afd7-4869-a648-321076d75f45","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:23:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"93e45959-afd7-4869-a648-321076d75f45\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I0103 19:32:06.219678   33509 request.go:629] Waited for 197.366532ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.191:8443/api/v1/nodes/multinode-484895-m03
	I0103 19:32:06.219744   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895-m03
	I0103 19:32:06.219750   33509 round_trippers.go:469] Request Headers:
	I0103 19:32:06.219757   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:32:06.219763   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:32:06.222136   33509 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:32:06.222155   33509 round_trippers.go:577] Response Headers:
	I0103 19:32:06.222162   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:32:06.222168   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:32:06 GMT
	I0103 19:32:06.222173   33509 round_trippers.go:580]     Audit-Id: 58279060-509e-480f-9435-65ff5e2fbde2
	I0103 19:32:06.222178   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:32:06.222185   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:32:06.222190   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:32:06.222293   33509 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895-m03","uid":"a1762911-aa8b-49cb-8632-51fb5a4220e2","resourceVersion":"695","creationTimestamp":"2024-01-03T19:24:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_24_07_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:24:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 3396 chars]
	I0103 19:32:06.222572   33509 pod_ready.go:92] pod "kube-proxy-strp6" in "kube-system" namespace has status "Ready":"True"
	I0103 19:32:06.222587   33509 pod_ready.go:81] duration metric: took 398.551661ms waiting for pod "kube-proxy-strp6" in "kube-system" namespace to be "Ready" ...
	I0103 19:32:06.222596   33509 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-tp9s2" in "kube-system" namespace to be "Ready" ...
	I0103 19:32:06.419674   33509 request.go:629] Waited for 197.002345ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tp9s2
	I0103 19:32:06.419765   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tp9s2
	I0103 19:32:06.419772   33509 round_trippers.go:469] Request Headers:
	I0103 19:32:06.419786   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:32:06.419803   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:32:06.422382   33509 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:32:06.422405   33509 round_trippers.go:577] Response Headers:
	I0103 19:32:06.422413   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:32:06 GMT
	I0103 19:32:06.422418   33509 round_trippers.go:580]     Audit-Id: 948586c2-4bae-44aa-a3c9-0ea4c9945eeb
	I0103 19:32:06.422427   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:32:06.422432   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:32:06.422437   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:32:06.422443   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:32:06.422623   33509 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tp9s2","generateName":"kube-proxy-","namespace":"kube-system","uid":"728b1db9-b145-4ad3-b366-7fd8306d7a2a","resourceVersion":"757","creationTimestamp":"2024-01-03T19:21:56Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"93e45959-afd7-4869-a648-321076d75f45","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:21:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"93e45959-afd7-4869-a648-321076d75f45\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0103 19:32:06.619548   33509 request.go:629] Waited for 196.368802ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.191:8443/api/v1/nodes/multinode-484895
	I0103 19:32:06.619618   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895
	I0103 19:32:06.619625   33509 round_trippers.go:469] Request Headers:
	I0103 19:32:06.619635   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:32:06.619647   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:32:06.622228   33509 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:32:06.622259   33509 round_trippers.go:577] Response Headers:
	I0103 19:32:06.622269   33509 round_trippers.go:580]     Audit-Id: 4582f1b5-9970-40db-a474-2b4c3caf07ba
	I0103 19:32:06.622278   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:32:06.622286   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:32:06.622298   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:32:06.622309   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:32:06.622317   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:32:06 GMT
	I0103 19:32:06.622530   33509 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","resourceVersion":"707","creationTimestamp":"2024-01-03T19:21:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_21_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:21:40Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0103 19:32:06.622898   33509 pod_ready.go:97] node "multinode-484895" hosting pod "kube-proxy-tp9s2" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-484895" has status "Ready":"False"
	I0103 19:32:06.622927   33509 pod_ready.go:81] duration metric: took 400.324258ms waiting for pod "kube-proxy-tp9s2" in "kube-system" namespace to be "Ready" ...
	E0103 19:32:06.622939   33509 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-484895" hosting pod "kube-proxy-tp9s2" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-484895" has status "Ready":"False"
	I0103 19:32:06.622947   33509 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-484895" in "kube-system" namespace to be "Ready" ...
	I0103 19:32:06.819818   33509 request.go:629] Waited for 196.793958ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-484895
	I0103 19:32:06.819907   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-484895
	I0103 19:32:06.819920   33509 round_trippers.go:469] Request Headers:
	I0103 19:32:06.819931   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:32:06.819940   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:32:06.823665   33509 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0103 19:32:06.823689   33509 round_trippers.go:577] Response Headers:
	I0103 19:32:06.823699   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:32:06.823708   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:32:06.823716   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:32:06.823727   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:32:06.823739   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:32:06 GMT
	I0103 19:32:06.823750   33509 round_trippers.go:580]     Audit-Id: eb822c11-32c1-4a00-9612-5cd4d906c52a
	I0103 19:32:06.823903   33509 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-484895","namespace":"kube-system","uid":"f981e6c0-1f4a-44ed-b043-c69ef28b4fa5","resourceVersion":"736","creationTimestamp":"2024-01-03T19:21:44Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"2de4242735fdb53c42fed3daf21e4e5e","kubernetes.io/config.mirror":"2de4242735fdb53c42fed3daf21e4e5e","kubernetes.io/config.seen":"2024-01-03T19:21:43.948372698Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:21:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4928 chars]
	I0103 19:32:07.019698   33509 request.go:629] Waited for 195.392123ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.191:8443/api/v1/nodes/multinode-484895
	I0103 19:32:07.019784   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895
	I0103 19:32:07.019792   33509 round_trippers.go:469] Request Headers:
	I0103 19:32:07.019806   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:32:07.019816   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:32:07.026668   33509 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0103 19:32:07.026697   33509 round_trippers.go:577] Response Headers:
	I0103 19:32:07.026708   33509 round_trippers.go:580]     Audit-Id: ddfc0768-cbc3-4d59-aaa0-832cff3ab642
	I0103 19:32:07.026717   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:32:07.026725   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:32:07.026733   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:32:07.026745   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:32:07.026753   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:32:06 GMT
	I0103 19:32:07.027071   33509 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","resourceVersion":"707","creationTimestamp":"2024-01-03T19:21:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_21_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:21:40Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0103 19:32:07.027406   33509 pod_ready.go:97] node "multinode-484895" hosting pod "kube-scheduler-multinode-484895" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-484895" has status "Ready":"False"
	I0103 19:32:07.027426   33509 pod_ready.go:81] duration metric: took 404.472241ms waiting for pod "kube-scheduler-multinode-484895" in "kube-system" namespace to be "Ready" ...
	E0103 19:32:07.027434   33509 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-484895" hosting pod "kube-scheduler-multinode-484895" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-484895" has status "Ready":"False"
	I0103 19:32:07.027446   33509 pod_ready.go:38] duration metric: took 1.758679537s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
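The pod_ready block above polls each system-critical pod, then its hosting node, and skips the wait whenever the node reports Ready=False. A minimal, hedged sketch of the same kind of readiness check with client-go follows; the function name, kubeconfig path, and the pod looked up in main are assumptions taken from the log, not minikube's implementation.

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the named pod's Ready condition is True,
// mirroring the per-pod checks in the wait loop above (illustrative only).
func podReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ready, err := podReady(context.Background(), cs, "kube-system", "etcd-multinode-484895")
	fmt.Println(ready, err)
}
```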
	I0103 19:32:07.027466   33509 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0103 19:32:07.038048   33509 command_runner.go:130] > -16
	I0103 19:32:07.038090   33509 ops.go:34] apiserver oom_adj: -16
	I0103 19:32:07.038099   33509 kubeadm.go:640] restartCluster took 22.876555593s
	I0103 19:32:07.038109   33509 kubeadm.go:406] StartCluster complete in 22.922558888s
	I0103 19:32:07.038128   33509 settings.go:142] acquiring lock: {Name:mkd213c48538fa01cb82b417485055a8adbf5e4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 19:32:07.038217   33509 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17885-9609/kubeconfig
	I0103 19:32:07.038894   33509 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-9609/kubeconfig: {Name:mkbd4e6a8b39f5a4a43fb71671a7bbd8b1617cf1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 19:32:07.039128   33509 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0103 19:32:07.039171   33509 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0103 19:32:07.042579   33509 out.go:177] * Enabled addons: 
	I0103 19:32:07.039414   33509 config.go:182] Loaded profile config "multinode-484895": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 19:32:07.039471   33509 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17885-9609/kubeconfig
	I0103 19:32:07.044068   33509 addons.go:508] enable addons completed in 4.907585ms: enabled=[]
	I0103 19:32:07.044376   33509 kapi.go:59] client config for multinode-484895: &rest.Config{Host:"https://192.168.39.191:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17885-9609/.minikube/profiles/multinode-484895/client.crt", KeyFile:"/home/jenkins/minikube-integration/17885-9609/.minikube/profiles/multinode-484895/client.key", CAFile:"/home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c20060), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0103 19:32:07.044717   33509 round_trippers.go:463] GET https://192.168.39.191:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0103 19:32:07.044727   33509 round_trippers.go:469] Request Headers:
	I0103 19:32:07.044734   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:32:07.044741   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:32:07.047450   33509 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:32:07.047470   33509 round_trippers.go:577] Response Headers:
	I0103 19:32:07.047510   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:32:07.047523   33509 round_trippers.go:580]     Content-Length: 291
	I0103 19:32:07.047539   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:32:07 GMT
	I0103 19:32:07.047550   33509 round_trippers.go:580]     Audit-Id: df1dd2c3-a393-464e-ab1f-c5f5dd905eb3
	I0103 19:32:07.047560   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:32:07.047568   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:32:07.047589   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:32:07.047654   33509 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"e2317390-8a66-46be-8656-5adca86177ea","resourceVersion":"758","creationTimestamp":"2024-01-03T19:21:43Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0103 19:32:07.047854   33509 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-484895" context rescaled to 1 replicas
	I0103 19:32:07.047889   33509 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.191 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0103 19:32:07.049560   33509 out.go:177] * Verifying Kubernetes components...
	I0103 19:32:07.051107   33509 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 19:32:07.141237   33509 command_runner.go:130] > apiVersion: v1
	I0103 19:32:07.141264   33509 command_runner.go:130] > data:
	I0103 19:32:07.141271   33509 command_runner.go:130] >   Corefile: |
	I0103 19:32:07.141277   33509 command_runner.go:130] >     .:53 {
	I0103 19:32:07.141283   33509 command_runner.go:130] >         log
	I0103 19:32:07.141289   33509 command_runner.go:130] >         errors
	I0103 19:32:07.141294   33509 command_runner.go:130] >         health {
	I0103 19:32:07.141300   33509 command_runner.go:130] >            lameduck 5s
	I0103 19:32:07.141306   33509 command_runner.go:130] >         }
	I0103 19:32:07.141313   33509 command_runner.go:130] >         ready
	I0103 19:32:07.141322   33509 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0103 19:32:07.141333   33509 command_runner.go:130] >            pods insecure
	I0103 19:32:07.141344   33509 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0103 19:32:07.141352   33509 command_runner.go:130] >            ttl 30
	I0103 19:32:07.141360   33509 command_runner.go:130] >         }
	I0103 19:32:07.141371   33509 command_runner.go:130] >         prometheus :9153
	I0103 19:32:07.141382   33509 command_runner.go:130] >         hosts {
	I0103 19:32:07.141395   33509 command_runner.go:130] >            192.168.39.1 host.minikube.internal
	I0103 19:32:07.141403   33509 command_runner.go:130] >            fallthrough
	I0103 19:32:07.141410   33509 command_runner.go:130] >         }
	I0103 19:32:07.141420   33509 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0103 19:32:07.141431   33509 command_runner.go:130] >            max_concurrent 1000
	I0103 19:32:07.141441   33509 command_runner.go:130] >         }
	I0103 19:32:07.141448   33509 command_runner.go:130] >         cache 30
	I0103 19:32:07.141458   33509 command_runner.go:130] >         loop
	I0103 19:32:07.141467   33509 command_runner.go:130] >         reload
	I0103 19:32:07.141475   33509 command_runner.go:130] >         loadbalance
	I0103 19:32:07.141482   33509 command_runner.go:130] >     }
	I0103 19:32:07.141490   33509 command_runner.go:130] > kind: ConfigMap
	I0103 19:32:07.141506   33509 command_runner.go:130] > metadata:
	I0103 19:32:07.141519   33509 command_runner.go:130] >   creationTimestamp: "2024-01-03T19:21:43Z"
	I0103 19:32:07.141549   33509 command_runner.go:130] >   name: coredns
	I0103 19:32:07.141560   33509 command_runner.go:130] >   namespace: kube-system
	I0103 19:32:07.141567   33509 command_runner.go:130] >   resourceVersion: "359"
	I0103 19:32:07.141576   33509 command_runner.go:130] >   uid: e65758c8-7a81-43f3-915e-38ae133a6536
	I0103 19:32:07.143669   33509 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0103 19:32:07.143685   33509 node_ready.go:35] waiting up to 6m0s for node "multinode-484895" to be "Ready" ...
	I0103 19:32:07.219062   33509 request.go:629] Waited for 75.256795ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.191:8443/api/v1/nodes/multinode-484895
	I0103 19:32:07.219152   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895
	I0103 19:32:07.219159   33509 round_trippers.go:469] Request Headers:
	I0103 19:32:07.219169   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:32:07.219177   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:32:07.221935   33509 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:32:07.221967   33509 round_trippers.go:577] Response Headers:
	I0103 19:32:07.221975   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:32:07.221981   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:32:07.221986   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:32:07.221992   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:32:07 GMT
	I0103 19:32:07.221997   33509 round_trippers.go:580]     Audit-Id: b86fdeda-e5c9-4058-b7f2-c6e5502056a2
	I0103 19:32:07.222003   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:32:07.222180   33509 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","resourceVersion":"707","creationTimestamp":"2024-01-03T19:21:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_21_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:21:40Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0103 19:32:07.644634   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895
	I0103 19:32:07.644662   33509 round_trippers.go:469] Request Headers:
	I0103 19:32:07.644672   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:32:07.644678   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:32:07.647437   33509 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:32:07.647464   33509 round_trippers.go:577] Response Headers:
	I0103 19:32:07.647471   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:32:07.647477   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:32:07.647482   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:32:07.647487   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:32:07.647492   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:32:07 GMT
	I0103 19:32:07.647497   33509 round_trippers.go:580]     Audit-Id: 2b3c7ada-cd2a-43fb-a31c-fb9008bbfcc8
	I0103 19:32:07.647661   33509 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","resourceVersion":"764","creationTimestamp":"2024-01-03T19:21:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_21_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:21:40Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0103 19:32:07.648036   33509 node_ready.go:49] node "multinode-484895" has status "Ready":"True"
	I0103 19:32:07.648058   33509 node_ready.go:38] duration metric: took 504.34857ms waiting for node "multinode-484895" to be "Ready" ...
	I0103 19:32:07.648069   33509 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 19:32:07.648133   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods
	I0103 19:32:07.648146   33509 round_trippers.go:469] Request Headers:
	I0103 19:32:07.648156   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:32:07.648169   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:32:07.651677   33509 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0103 19:32:07.651701   33509 round_trippers.go:577] Response Headers:
	I0103 19:32:07.651711   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:32:07 GMT
	I0103 19:32:07.651719   33509 round_trippers.go:580]     Audit-Id: b8a34fdc-02c2-4da1-9a94-2501d82cd934
	I0103 19:32:07.651726   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:32:07.651734   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:32:07.651741   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:32:07.651751   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:32:07.652970   33509 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"764"},"items":[{"metadata":{"name":"coredns-5dd5756b68-wzsqb","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9e8dd1fe-7476-40d5-8cb7-562fcdc5deaa","resourceVersion":"744","creationTimestamp":"2024-01-03T19:21:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e9219a81-ca58-4a90-b963-60ed0c2d0b1c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:21:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e9219a81-ca58-4a90-b963-60ed0c2d0b1c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82957 chars]
	I0103 19:32:07.656666   33509 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-wzsqb" in "kube-system" namespace to be "Ready" ...
	I0103 19:32:07.819165   33509 request.go:629] Waited for 162.401295ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-wzsqb
	I0103 19:32:07.819234   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-wzsqb
	I0103 19:32:07.819249   33509 round_trippers.go:469] Request Headers:
	I0103 19:32:07.819260   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:32:07.819270   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:32:07.823372   33509 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0103 19:32:07.823397   33509 round_trippers.go:577] Response Headers:
	I0103 19:32:07.823407   33509 round_trippers.go:580]     Audit-Id: 2f5586da-0df3-4cea-9246-b620b2c5f6c2
	I0103 19:32:07.823415   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:32:07.823423   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:32:07.823434   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:32:07.823446   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:32:07.823454   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:32:07 GMT
	I0103 19:32:07.823630   33509 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-wzsqb","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9e8dd1fe-7476-40d5-8cb7-562fcdc5deaa","resourceVersion":"744","creationTimestamp":"2024-01-03T19:21:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e9219a81-ca58-4a90-b963-60ed0c2d0b1c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:21:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e9219a81-ca58-4a90-b963-60ed0c2d0b1c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0103 19:32:08.019536   33509 request.go:629] Waited for 195.390651ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.191:8443/api/v1/nodes/multinode-484895
	I0103 19:32:08.019604   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895
	I0103 19:32:08.019611   33509 round_trippers.go:469] Request Headers:
	I0103 19:32:08.019619   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:32:08.019625   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:32:08.022588   33509 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:32:08.022607   33509 round_trippers.go:577] Response Headers:
	I0103 19:32:08.022614   33509 round_trippers.go:580]     Audit-Id: 377aaee8-3214-4ea8-870b-2ccbe6503bf9
	I0103 19:32:08.022620   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:32:08.022625   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:32:08.022708   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:32:08.022734   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:32:08.022751   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:32:07 GMT
	I0103 19:32:08.022894   33509 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","resourceVersion":"764","creationTimestamp":"2024-01-03T19:21:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_21_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:21:40Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0103 19:32:08.219463   33509 request.go:629] Waited for 62.254967ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-wzsqb
	I0103 19:32:08.219548   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-wzsqb
	I0103 19:32:08.219554   33509 round_trippers.go:469] Request Headers:
	I0103 19:32:08.219564   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:32:08.219573   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:32:08.222778   33509 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0103 19:32:08.222799   33509 round_trippers.go:577] Response Headers:
	I0103 19:32:08.222808   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:32:08.222815   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:32:08.222823   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:32:08.222830   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:32:08.222837   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:32:08 GMT
	I0103 19:32:08.222844   33509 round_trippers.go:580]     Audit-Id: 457bb454-f456-4644-aae2-823bccfe040c
	I0103 19:32:08.223172   33509 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-wzsqb","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9e8dd1fe-7476-40d5-8cb7-562fcdc5deaa","resourceVersion":"744","creationTimestamp":"2024-01-03T19:21:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e9219a81-ca58-4a90-b963-60ed0c2d0b1c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:21:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e9219a81-ca58-4a90-b963-60ed0c2d0b1c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0103 19:32:08.418913   33509 request.go:629] Waited for 195.313026ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.191:8443/api/v1/nodes/multinode-484895
	I0103 19:32:08.418987   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895
	I0103 19:32:08.418992   33509 round_trippers.go:469] Request Headers:
	I0103 19:32:08.419012   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:32:08.419021   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:32:08.421817   33509 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:32:08.421838   33509 round_trippers.go:577] Response Headers:
	I0103 19:32:08.421852   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:32:08.421858   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:32:08.421863   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:32:08 GMT
	I0103 19:32:08.421868   33509 round_trippers.go:580]     Audit-Id: 20f8ec6e-d82d-45f8-8d58-84540f8eef6a
	I0103 19:32:08.421873   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:32:08.421879   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:32:08.422056   33509 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","resourceVersion":"764","creationTimestamp":"2024-01-03T19:21:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_21_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:21:40Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0103 19:32:08.657469   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-wzsqb
	I0103 19:32:08.657506   33509 round_trippers.go:469] Request Headers:
	I0103 19:32:08.657518   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:32:08.657527   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:32:08.661095   33509 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0103 19:32:08.661117   33509 round_trippers.go:577] Response Headers:
	I0103 19:32:08.661139   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:32:08 GMT
	I0103 19:32:08.661147   33509 round_trippers.go:580]     Audit-Id: 6c88d72c-f825-4767-b5f4-9f69e510c44e
	I0103 19:32:08.661155   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:32:08.661164   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:32:08.661178   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:32:08.661204   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:32:08.661961   33509 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-wzsqb","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9e8dd1fe-7476-40d5-8cb7-562fcdc5deaa","resourceVersion":"744","creationTimestamp":"2024-01-03T19:21:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e9219a81-ca58-4a90-b963-60ed0c2d0b1c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:21:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e9219a81-ca58-4a90-b963-60ed0c2d0b1c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0103 19:32:08.819751   33509 request.go:629] Waited for 157.366936ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.191:8443/api/v1/nodes/multinode-484895
	I0103 19:32:08.819846   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895
	I0103 19:32:08.819853   33509 round_trippers.go:469] Request Headers:
	I0103 19:32:08.819863   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:32:08.819871   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:32:08.822678   33509 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:32:08.822697   33509 round_trippers.go:577] Response Headers:
	I0103 19:32:08.822716   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:32:08.822721   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:32:08 GMT
	I0103 19:32:08.822726   33509 round_trippers.go:580]     Audit-Id: b560f5e3-63ab-45bd-ba4b-f7a42a769c7f
	I0103 19:32:08.822738   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:32:08.822749   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:32:08.822759   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:32:08.823090   33509 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","resourceVersion":"764","creationTimestamp":"2024-01-03T19:21:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_21_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:21:40Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0103 19:32:09.157740   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-wzsqb
	I0103 19:32:09.157769   33509 round_trippers.go:469] Request Headers:
	I0103 19:32:09.157777   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:32:09.157783   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:32:09.160829   33509 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0103 19:32:09.160852   33509 round_trippers.go:577] Response Headers:
	I0103 19:32:09.160860   33509 round_trippers.go:580]     Audit-Id: 762cf36c-8a5b-4c9a-b0f9-f4d9d0e4f084
	I0103 19:32:09.160865   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:32:09.160870   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:32:09.160875   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:32:09.160880   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:32:09.160885   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:32:09 GMT
	I0103 19:32:09.161067   33509 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-wzsqb","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9e8dd1fe-7476-40d5-8cb7-562fcdc5deaa","resourceVersion":"744","creationTimestamp":"2024-01-03T19:21:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e9219a81-ca58-4a90-b963-60ed0c2d0b1c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:21:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e9219a81-ca58-4a90-b963-60ed0c2d0b1c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0103 19:32:09.218802   33509 request.go:629] Waited for 57.177377ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.191:8443/api/v1/nodes/multinode-484895
	I0103 19:32:09.218885   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895
	I0103 19:32:09.218892   33509 round_trippers.go:469] Request Headers:
	I0103 19:32:09.218907   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:32:09.218923   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:32:09.221616   33509 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:32:09.221646   33509 round_trippers.go:577] Response Headers:
	I0103 19:32:09.221653   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:32:09.221659   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:32:09.221664   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:32:09 GMT
	I0103 19:32:09.221669   33509 round_trippers.go:580]     Audit-Id: a014ed1b-97fc-4018-b18d-e844adc0fdc1
	I0103 19:32:09.221674   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:32:09.221679   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:32:09.221827   33509 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","resourceVersion":"764","creationTimestamp":"2024-01-03T19:21:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_21_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:21:40Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0103 19:32:09.657940   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-wzsqb
	I0103 19:32:09.657962   33509 round_trippers.go:469] Request Headers:
	I0103 19:32:09.657981   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:32:09.657989   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:32:09.660931   33509 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:32:09.660953   33509 round_trippers.go:577] Response Headers:
	I0103 19:32:09.660960   33509 round_trippers.go:580]     Audit-Id: c74e92f5-63ff-4d6c-9c69-7d0739e71e76
	I0103 19:32:09.660966   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:32:09.660971   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:32:09.660976   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:32:09.660984   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:32:09.660992   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:32:09 GMT
	I0103 19:32:09.661362   33509 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-wzsqb","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9e8dd1fe-7476-40d5-8cb7-562fcdc5deaa","resourceVersion":"744","creationTimestamp":"2024-01-03T19:21:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e9219a81-ca58-4a90-b963-60ed0c2d0b1c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:21:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e9219a81-ca58-4a90-b963-60ed0c2d0b1c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0103 19:32:09.661825   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895
	I0103 19:32:09.661841   33509 round_trippers.go:469] Request Headers:
	I0103 19:32:09.661853   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:32:09.661861   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:32:09.664061   33509 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:32:09.664095   33509 round_trippers.go:577] Response Headers:
	I0103 19:32:09.664105   33509 round_trippers.go:580]     Audit-Id: 90147f40-aa61-4de0-a678-1cf1a398da7c
	I0103 19:32:09.664114   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:32:09.664123   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:32:09.664131   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:32:09.664142   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:32:09.664151   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:32:09 GMT
	I0103 19:32:09.664266   33509 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","resourceVersion":"764","creationTimestamp":"2024-01-03T19:21:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_21_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:21:40Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0103 19:32:09.664659   33509 pod_ready.go:102] pod "coredns-5dd5756b68-wzsqb" in "kube-system" namespace has status "Ready":"False"
	I0103 19:32:10.157672   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-wzsqb
	I0103 19:32:10.157702   33509 round_trippers.go:469] Request Headers:
	I0103 19:32:10.157712   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:32:10.157721   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:32:10.160623   33509 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:32:10.160644   33509 round_trippers.go:577] Response Headers:
	I0103 19:32:10.160651   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:32:10.160656   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:32:10.160662   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:32:10.160670   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:32:10 GMT
	I0103 19:32:10.160679   33509 round_trippers.go:580]     Audit-Id: 4dbc9bba-ac17-4eed-9e8d-ce9ab51f2c1e
	I0103 19:32:10.160688   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:32:10.160890   33509 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-wzsqb","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9e8dd1fe-7476-40d5-8cb7-562fcdc5deaa","resourceVersion":"744","creationTimestamp":"2024-01-03T19:21:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e9219a81-ca58-4a90-b963-60ed0c2d0b1c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:21:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e9219a81-ca58-4a90-b963-60ed0c2d0b1c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0103 19:32:10.161430   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895
	I0103 19:32:10.161445   33509 round_trippers.go:469] Request Headers:
	I0103 19:32:10.161455   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:32:10.161464   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:32:10.163721   33509 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:32:10.163738   33509 round_trippers.go:577] Response Headers:
	I0103 19:32:10.163747   33509 round_trippers.go:580]     Audit-Id: 2da7e2dc-a424-411c-8650-f780c24aef97
	I0103 19:32:10.163754   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:32:10.163762   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:32:10.163769   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:32:10.163776   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:32:10.163785   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:32:10 GMT
	I0103 19:32:10.164217   33509 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","resourceVersion":"764","creationTimestamp":"2024-01-03T19:21:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_21_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:21:40Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0103 19:32:10.657768   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-wzsqb
	I0103 19:32:10.657791   33509 round_trippers.go:469] Request Headers:
	I0103 19:32:10.657802   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:32:10.657810   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:32:10.662757   33509 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0103 19:32:10.662782   33509 round_trippers.go:577] Response Headers:
	I0103 19:32:10.662793   33509 round_trippers.go:580]     Audit-Id: 2611c325-b103-4417-a770-d885629ffb0f
	I0103 19:32:10.662800   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:32:10.662807   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:32:10.662814   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:32:10.662822   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:32:10.662831   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:32:10 GMT
	I0103 19:32:10.663014   33509 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-wzsqb","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9e8dd1fe-7476-40d5-8cb7-562fcdc5deaa","resourceVersion":"744","creationTimestamp":"2024-01-03T19:21:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e9219a81-ca58-4a90-b963-60ed0c2d0b1c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:21:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e9219a81-ca58-4a90-b963-60ed0c2d0b1c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0103 19:32:10.663509   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895
	I0103 19:32:10.663530   33509 round_trippers.go:469] Request Headers:
	I0103 19:32:10.663546   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:32:10.663556   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:32:10.670402   33509 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0103 19:32:10.670423   33509 round_trippers.go:577] Response Headers:
	I0103 19:32:10.670430   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:32:10.670436   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:32:10 GMT
	I0103 19:32:10.670441   33509 round_trippers.go:580]     Audit-Id: 6b2f6e36-c5fa-427d-a73e-9510eb830e21
	I0103 19:32:10.670445   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:32:10.670450   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:32:10.670455   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:32:10.671212   33509 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","resourceVersion":"764","creationTimestamp":"2024-01-03T19:21:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_21_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:21:40Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0103 19:32:11.156823   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-wzsqb
	I0103 19:32:11.156869   33509 round_trippers.go:469] Request Headers:
	I0103 19:32:11.156879   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:32:11.156888   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:32:11.159763   33509 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:32:11.159790   33509 round_trippers.go:577] Response Headers:
	I0103 19:32:11.159800   33509 round_trippers.go:580]     Audit-Id: e872db19-1731-452f-a5d0-fff50f1d6149
	I0103 19:32:11.159809   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:32:11.159817   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:32:11.159824   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:32:11.159844   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:32:11.159853   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:32:11 GMT
	I0103 19:32:11.160052   33509 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-wzsqb","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9e8dd1fe-7476-40d5-8cb7-562fcdc5deaa","resourceVersion":"744","creationTimestamp":"2024-01-03T19:21:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e9219a81-ca58-4a90-b963-60ed0c2d0b1c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:21:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e9219a81-ca58-4a90-b963-60ed0c2d0b1c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0103 19:32:11.160481   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895
	I0103 19:32:11.160527   33509 round_trippers.go:469] Request Headers:
	I0103 19:32:11.160539   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:32:11.160556   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:32:11.163962   33509 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0103 19:32:11.163986   33509 round_trippers.go:577] Response Headers:
	I0103 19:32:11.163995   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:32:11.164003   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:32:11.164011   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:32:11.164020   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:32:11 GMT
	I0103 19:32:11.164029   33509 round_trippers.go:580]     Audit-Id: bcf0e7f3-c020-4a06-9d65-f2d61de26171
	I0103 19:32:11.164042   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:32:11.164472   33509 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","resourceVersion":"764","creationTimestamp":"2024-01-03T19:21:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_21_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:21:40Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0103 19:32:11.657118   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-wzsqb
	I0103 19:32:11.657150   33509 round_trippers.go:469] Request Headers:
	I0103 19:32:11.657168   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:32:11.657177   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:32:11.660062   33509 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:32:11.660085   33509 round_trippers.go:577] Response Headers:
	I0103 19:32:11.660092   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:32:11.660098   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:32:11.660103   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:32:11 GMT
	I0103 19:32:11.660108   33509 round_trippers.go:580]     Audit-Id: 0d64cf3e-0808-4c3f-a284-ca5af485eff2
	I0103 19:32:11.660113   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:32:11.660118   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:32:11.660506   33509 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-wzsqb","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9e8dd1fe-7476-40d5-8cb7-562fcdc5deaa","resourceVersion":"744","creationTimestamp":"2024-01-03T19:21:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e9219a81-ca58-4a90-b963-60ed0c2d0b1c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:21:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e9219a81-ca58-4a90-b963-60ed0c2d0b1c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0103 19:32:11.661057   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895
	I0103 19:32:11.661072   33509 round_trippers.go:469] Request Headers:
	I0103 19:32:11.661079   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:32:11.661087   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:32:11.663221   33509 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:32:11.663237   33509 round_trippers.go:577] Response Headers:
	I0103 19:32:11.663244   33509 round_trippers.go:580]     Audit-Id: a28bb192-b857-4bdb-81ef-0bc9020578a7
	I0103 19:32:11.663249   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:32:11.663256   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:32:11.663262   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:32:11.663269   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:32:11.663274   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:32:11 GMT
	I0103 19:32:11.663436   33509 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","resourceVersion":"764","creationTimestamp":"2024-01-03T19:21:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_21_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:21:40Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0103 19:32:12.157047   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-wzsqb
	I0103 19:32:12.157074   33509 round_trippers.go:469] Request Headers:
	I0103 19:32:12.157088   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:32:12.157098   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:32:12.159797   33509 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:32:12.159818   33509 round_trippers.go:577] Response Headers:
	I0103 19:32:12.159828   33509 round_trippers.go:580]     Audit-Id: a5a08dca-9ff1-46bf-8177-9afbca0305b1
	I0103 19:32:12.159835   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:32:12.159843   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:32:12.159851   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:32:12.159863   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:32:12.159873   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:32:12 GMT
	I0103 19:32:12.160072   33509 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-wzsqb","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9e8dd1fe-7476-40d5-8cb7-562fcdc5deaa","resourceVersion":"833","creationTimestamp":"2024-01-03T19:21:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e9219a81-ca58-4a90-b963-60ed0c2d0b1c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:21:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e9219a81-ca58-4a90-b963-60ed0c2d0b1c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I0103 19:32:12.160509   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895
	I0103 19:32:12.160526   33509 round_trippers.go:469] Request Headers:
	I0103 19:32:12.160541   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:32:12.160550   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:32:12.162769   33509 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:32:12.162785   33509 round_trippers.go:577] Response Headers:
	I0103 19:32:12.162791   33509 round_trippers.go:580]     Audit-Id: 822008b6-96a3-47e1-9f2a-19978f102878
	I0103 19:32:12.162797   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:32:12.162804   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:32:12.162809   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:32:12.162814   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:32:12.162819   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:32:12 GMT
	I0103 19:32:12.162985   33509 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","resourceVersion":"764","creationTimestamp":"2024-01-03T19:21:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_21_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:21:40Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0103 19:32:12.163258   33509 pod_ready.go:92] pod "coredns-5dd5756b68-wzsqb" in "kube-system" namespace has status "Ready":"True"
	I0103 19:32:12.163272   33509 pod_ready.go:81] duration metric: took 4.506580874s waiting for pod "coredns-5dd5756b68-wzsqb" in "kube-system" namespace to be "Ready" ...
	I0103 19:32:12.163280   33509 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-484895" in "kube-system" namespace to be "Ready" ...
	I0103 19:32:12.163331   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-484895
	I0103 19:32:12.163338   33509 round_trippers.go:469] Request Headers:
	I0103 19:32:12.163345   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:32:12.163351   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:32:12.165542   33509 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:32:12.165563   33509 round_trippers.go:577] Response Headers:
	I0103 19:32:12.165577   33509 round_trippers.go:580]     Audit-Id: 19075a6d-0b76-4837-b9ce-467eb16e0070
	I0103 19:32:12.165585   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:32:12.165592   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:32:12.165601   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:32:12.165612   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:32:12.165627   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:32:12 GMT
	I0103 19:32:12.165778   33509 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-484895","namespace":"kube-system","uid":"2b5f9dc7-2d61-4968-9b9a-cfc029c9522b","resourceVersion":"825","creationTimestamp":"2024-01-03T19:21:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.191:2379","kubernetes.io/config.hash":"9bc39430cce393fdab624e5093adf15c","kubernetes.io/config.mirror":"9bc39430cce393fdab624e5093adf15c","kubernetes.io/config.seen":"2024-01-03T19:21:43.948366778Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:21:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I0103 19:32:12.166238   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895
	I0103 19:32:12.166254   33509 round_trippers.go:469] Request Headers:
	I0103 19:32:12.166261   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:32:12.166267   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:32:12.168245   33509 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0103 19:32:12.168260   33509 round_trippers.go:577] Response Headers:
	I0103 19:32:12.168266   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:32:12.168271   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:32:12.168282   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:32:12 GMT
	I0103 19:32:12.168293   33509 round_trippers.go:580]     Audit-Id: d6e6c053-6e74-4ab4-9396-fd1de96b53b4
	I0103 19:32:12.168301   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:32:12.168310   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:32:12.168798   33509 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","resourceVersion":"764","creationTimestamp":"2024-01-03T19:21:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_21_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:21:40Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0103 19:32:12.169052   33509 pod_ready.go:92] pod "etcd-multinode-484895" in "kube-system" namespace has status "Ready":"True"
	I0103 19:32:12.169065   33509 pod_ready.go:81] duration metric: took 5.780294ms waiting for pod "etcd-multinode-484895" in "kube-system" namespace to be "Ready" ...
	I0103 19:32:12.169079   33509 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-484895" in "kube-system" namespace to be "Ready" ...
	I0103 19:32:12.219357   33509 request.go:629] Waited for 50.206215ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-484895
	I0103 19:32:12.219416   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-484895
	I0103 19:32:12.219421   33509 round_trippers.go:469] Request Headers:
	I0103 19:32:12.219429   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:32:12.219434   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:32:12.222229   33509 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:32:12.222250   33509 round_trippers.go:577] Response Headers:
	I0103 19:32:12.222257   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:32:12 GMT
	I0103 19:32:12.222263   33509 round_trippers.go:580]     Audit-Id: 61bbe51d-c1f8-42b2-b61a-28a221a992b1
	I0103 19:32:12.222268   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:32:12.222273   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:32:12.222278   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:32:12.222286   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:32:12.222433   33509 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-484895","namespace":"kube-system","uid":"f9f36416-b761-4534-8e09-bc3c94813149","resourceVersion":"827","creationTimestamp":"2024-01-03T19:21:44Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.191:8443","kubernetes.io/config.hash":"2adb5a2561f637a585e38e2b73f2b809","kubernetes.io/config.mirror":"2adb5a2561f637a585e38e2b73f2b809","kubernetes.io/config.seen":"2024-01-03T19:21:43.948370781Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:21:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
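The repeated "Waited for ... due to client-side throttling, not priority and fairness" entries above come from client-go's client-side rate limiter rather than the API server's priority-and-fairness machinery. A minimal sketch of how that limiter is typically configured, assuming a kubeconfig path and illustrative QPS/Burst values that are not the ones minikube uses:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load a kubeconfig; the path here is a placeholder for this sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}

	// QPS and Burst drive client-go's token-bucket rate limiter; when the
	// bucket is empty a request blocks and logs the throttling message seen
	// in the output above. The values here are illustrative only.
	cfg.QPS = 5
	cfg.Burst = 10

	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Printf("clientset %T configured with QPS=%v Burst=%d\n", clientset, cfg.QPS, cfg.Burst)
}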
	I0103 19:32:12.419216   33509 request.go:629] Waited for 196.361124ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.191:8443/api/v1/nodes/multinode-484895
	I0103 19:32:12.419287   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895
	I0103 19:32:12.419292   33509 round_trippers.go:469] Request Headers:
	I0103 19:32:12.419300   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:32:12.419309   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:32:12.422312   33509 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:32:12.422332   33509 round_trippers.go:577] Response Headers:
	I0103 19:32:12.422340   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:32:12.422345   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:32:12 GMT
	I0103 19:32:12.422350   33509 round_trippers.go:580]     Audit-Id: ee61c594-0f69-491b-961e-0430803f37df
	I0103 19:32:12.422355   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:32:12.422360   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:32:12.422366   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:32:12.422505   33509 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","resourceVersion":"764","creationTimestamp":"2024-01-03T19:21:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_21_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:21:40Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0103 19:32:12.422928   33509 pod_ready.go:92] pod "kube-apiserver-multinode-484895" in "kube-system" namespace has status "Ready":"True"
	I0103 19:32:12.422948   33509 pod_ready.go:81] duration metric: took 253.861595ms waiting for pod "kube-apiserver-multinode-484895" in "kube-system" namespace to be "Ready" ...
	I0103 19:32:12.422961   33509 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-484895" in "kube-system" namespace to be "Ready" ...
	I0103 19:32:12.619427   33509 request.go:629] Waited for 196.406357ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-484895
	I0103 19:32:12.619516   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-484895
	I0103 19:32:12.619522   33509 round_trippers.go:469] Request Headers:
	I0103 19:32:12.619530   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:32:12.619540   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:32:12.622795   33509 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0103 19:32:12.622817   33509 round_trippers.go:577] Response Headers:
	I0103 19:32:12.622826   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:32:12.622834   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:32:12.622841   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:32:12.622847   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:32:12.622855   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:32:12 GMT
	I0103 19:32:12.622863   33509 round_trippers.go:580]     Audit-Id: 780a36d7-9367-47cf-b1f6-82c8fc24ebb9
	I0103 19:32:12.623046   33509 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-484895","namespace":"kube-system","uid":"a04de258-1f92-4ac7-8f30-18ad9ebb6d40","resourceVersion":"751","creationTimestamp":"2024-01-03T19:21:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"091c426717be69d480bcc59d28e953ce","kubernetes.io/config.mirror":"091c426717be69d480bcc59d28e953ce","kubernetes.io/config.seen":"2024-01-03T19:21:43.948371847Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:21:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7216 chars]
	I0103 19:32:12.819877   33509 request.go:629] Waited for 196.39073ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.191:8443/api/v1/nodes/multinode-484895
	I0103 19:32:12.819932   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895
	I0103 19:32:12.819937   33509 round_trippers.go:469] Request Headers:
	I0103 19:32:12.819945   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:32:12.819951   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:32:12.822464   33509 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:32:12.822487   33509 round_trippers.go:577] Response Headers:
	I0103 19:32:12.822499   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:32:12.822507   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:32:12.822537   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:32:12.822547   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:32:12.822555   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:32:12 GMT
	I0103 19:32:12.822562   33509 round_trippers.go:580]     Audit-Id: a43eee8f-040a-486e-8b97-d59061866b3f
	I0103 19:32:12.822766   33509 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","resourceVersion":"764","creationTimestamp":"2024-01-03T19:21:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_21_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:21:40Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0103 19:32:13.019259   33509 request.go:629] Waited for 95.25882ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-484895
	I0103 19:32:13.019323   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-484895
	I0103 19:32:13.019328   33509 round_trippers.go:469] Request Headers:
	I0103 19:32:13.019336   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:32:13.019341   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:32:13.022009   33509 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:32:13.022032   33509 round_trippers.go:577] Response Headers:
	I0103 19:32:13.022041   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:32:13.022058   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:32:13.022067   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:32:13.022079   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:32:13.022088   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:32:13 GMT
	I0103 19:32:13.022105   33509 round_trippers.go:580]     Audit-Id: fe375bd0-a36e-4d61-8602-dd69104ec070
	I0103 19:32:13.022302   33509 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-484895","namespace":"kube-system","uid":"a04de258-1f92-4ac7-8f30-18ad9ebb6d40","resourceVersion":"751","creationTimestamp":"2024-01-03T19:21:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"091c426717be69d480bcc59d28e953ce","kubernetes.io/config.mirror":"091c426717be69d480bcc59d28e953ce","kubernetes.io/config.seen":"2024-01-03T19:21:43.948371847Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:21:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7216 chars]
	I0103 19:32:13.219168   33509 request.go:629] Waited for 196.417421ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.191:8443/api/v1/nodes/multinode-484895
	I0103 19:32:13.219264   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895
	I0103 19:32:13.219271   33509 round_trippers.go:469] Request Headers:
	I0103 19:32:13.219284   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:32:13.219294   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:32:13.222095   33509 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:32:13.222116   33509 round_trippers.go:577] Response Headers:
	I0103 19:32:13.222133   33509 round_trippers.go:580]     Audit-Id: 16ffbc70-8ec2-41dd-b4dd-300dd2f1b792
	I0103 19:32:13.222139   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:32:13.222166   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:32:13.222172   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:32:13.222178   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:32:13.222188   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:32:13 GMT
	I0103 19:32:13.222406   33509 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","resourceVersion":"764","creationTimestamp":"2024-01-03T19:21:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_21_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:21:40Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0103 19:32:13.423763   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-484895
	I0103 19:32:13.423790   33509 round_trippers.go:469] Request Headers:
	I0103 19:32:13.423797   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:32:13.423803   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:32:13.426910   33509 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0103 19:32:13.426938   33509 round_trippers.go:577] Response Headers:
	I0103 19:32:13.426948   33509 round_trippers.go:580]     Audit-Id: 42cb56a1-357a-442f-b178-a96d7f60cbc8
	I0103 19:32:13.426956   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:32:13.426962   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:32:13.426969   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:32:13.426976   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:32:13.426983   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:32:13 GMT
	I0103 19:32:13.427169   33509 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-484895","namespace":"kube-system","uid":"a04de258-1f92-4ac7-8f30-18ad9ebb6d40","resourceVersion":"838","creationTimestamp":"2024-01-03T19:21:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"091c426717be69d480bcc59d28e953ce","kubernetes.io/config.mirror":"091c426717be69d480bcc59d28e953ce","kubernetes.io/config.seen":"2024-01-03T19:21:43.948371847Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:21:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I0103 19:32:13.619899   33509 request.go:629] Waited for 192.268413ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.191:8443/api/v1/nodes/multinode-484895
	I0103 19:32:13.620009   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895
	I0103 19:32:13.620027   33509 round_trippers.go:469] Request Headers:
	I0103 19:32:13.620041   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:32:13.620062   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:32:13.623144   33509 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0103 19:32:13.623170   33509 round_trippers.go:577] Response Headers:
	I0103 19:32:13.623182   33509 round_trippers.go:580]     Audit-Id: a892544c-6544-49c4-aabd-cb6763a13ff3
	I0103 19:32:13.623198   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:32:13.623209   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:32:13.623217   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:32:13.623228   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:32:13.623238   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:32:13 GMT
	I0103 19:32:13.623394   33509 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","resourceVersion":"764","creationTimestamp":"2024-01-03T19:21:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_21_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:21:40Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0103 19:32:13.623795   33509 pod_ready.go:92] pod "kube-controller-manager-multinode-484895" in "kube-system" namespace has status "Ready":"True"
	I0103 19:32:13.623816   33509 pod_ready.go:81] duration metric: took 1.200846772s waiting for pod "kube-controller-manager-multinode-484895" in "kube-system" namespace to be "Ready" ...
	I0103 19:32:13.623827   33509 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-k7jnm" in "kube-system" namespace to be "Ready" ...
	I0103 19:32:13.819293   33509 request.go:629] Waited for 195.383597ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/kube-proxy-k7jnm
	I0103 19:32:13.819450   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/kube-proxy-k7jnm
	I0103 19:32:13.819474   33509 round_trippers.go:469] Request Headers:
	I0103 19:32:13.819483   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:32:13.819489   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:32:13.822426   33509 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:32:13.822448   33509 round_trippers.go:577] Response Headers:
	I0103 19:32:13.822458   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:32:13.822466   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:32:13.822472   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:32:13 GMT
	I0103 19:32:13.822488   33509 round_trippers.go:580]     Audit-Id: 5f98ae2b-d13c-4e30-b4c3-f2f92a02f0c0
	I0103 19:32:13.822495   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:32:13.822505   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:32:13.822689   33509 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-k7jnm","generateName":"kube-proxy-","namespace":"kube-system","uid":"4b0bd9f4-9da5-42c6-83a4-0a3f05f640b3","resourceVersion":"470","creationTimestamp":"2024-01-03T19:22:34Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"93e45959-afd7-4869-a648-321076d75f45","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:22:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"93e45959-afd7-4869-a648-321076d75f45\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5522 chars]
	I0103 19:32:14.019726   33509 request.go:629] Waited for 196.458371ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.191:8443/api/v1/nodes/multinode-484895-m02
	I0103 19:32:14.019803   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895-m02
	I0103 19:32:14.019809   33509 round_trippers.go:469] Request Headers:
	I0103 19:32:14.019816   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:32:14.019822   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:32:14.024147   33509 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0103 19:32:14.024185   33509 round_trippers.go:577] Response Headers:
	I0103 19:32:14.024197   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:32:14.024205   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:32:14.024213   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:32:14 GMT
	I0103 19:32:14.024221   33509 round_trippers.go:580]     Audit-Id: 29906f4f-68f2-4360-8d3c-8503837f85d6
	I0103 19:32:14.024233   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:32:14.024241   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:32:14.024422   33509 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895-m02","uid":"7da57402-60a6-432d-91c4-768d87ae2e5f","resourceVersion":"763","creationTimestamp":"2024-01-03T19:22:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_24_07_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:22:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 4235 chars]
	I0103 19:32:14.024721   33509 pod_ready.go:92] pod "kube-proxy-k7jnm" in "kube-system" namespace has status "Ready":"True"
	I0103 19:32:14.024737   33509 pod_ready.go:81] duration metric: took 400.903634ms waiting for pod "kube-proxy-k7jnm" in "kube-system" namespace to be "Ready" ...
	I0103 19:32:14.024746   33509 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-strp6" in "kube-system" namespace to be "Ready" ...
	I0103 19:32:14.218996   33509 request.go:629] Waited for 194.198596ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/kube-proxy-strp6
	I0103 19:32:14.219075   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/kube-proxy-strp6
	I0103 19:32:14.219081   33509 round_trippers.go:469] Request Headers:
	I0103 19:32:14.219088   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:32:14.219094   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:32:14.221964   33509 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:32:14.221982   33509 round_trippers.go:577] Response Headers:
	I0103 19:32:14.221988   33509 round_trippers.go:580]     Audit-Id: a8da5caa-1778-4627-9643-354bb03febb4
	I0103 19:32:14.221994   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:32:14.221999   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:32:14.222007   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:32:14.222015   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:32:14.222023   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:32:14 GMT
	I0103 19:32:14.222223   33509 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-strp6","generateName":"kube-proxy-","namespace":"kube-system","uid":"f16942b4-2697-4fd7-88f7-3699e16bff79","resourceVersion":"677","creationTimestamp":"2024-01-03T19:23:25Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"93e45959-afd7-4869-a648-321076d75f45","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:23:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"93e45959-afd7-4869-a648-321076d75f45\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I0103 19:32:14.419180   33509 request.go:629] Waited for 196.350622ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.191:8443/api/v1/nodes/multinode-484895-m03
	I0103 19:32:14.419247   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895-m03
	I0103 19:32:14.419255   33509 round_trippers.go:469] Request Headers:
	I0103 19:32:14.419296   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:32:14.419319   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:32:14.424369   33509 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0103 19:32:14.424396   33509 round_trippers.go:577] Response Headers:
	I0103 19:32:14.424414   33509 round_trippers.go:580]     Audit-Id: 7597327c-bc65-46ba-8dc6-410b1c83388e
	I0103 19:32:14.424422   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:32:14.424430   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:32:14.424438   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:32:14.424446   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:32:14.424454   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:32:14 GMT
	I0103 19:32:14.424578   33509 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895-m03","uid":"a1762911-aa8b-49cb-8632-51fb5a4220e2","resourceVersion":"761","creationTimestamp":"2024-01-03T19:24:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_24_07_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:24:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 3965 chars]
	I0103 19:32:14.425003   33509 pod_ready.go:92] pod "kube-proxy-strp6" in "kube-system" namespace has status "Ready":"True"
	I0103 19:32:14.425031   33509 pod_ready.go:81] duration metric: took 400.278837ms waiting for pod "kube-proxy-strp6" in "kube-system" namespace to be "Ready" ...
	I0103 19:32:14.425044   33509 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tp9s2" in "kube-system" namespace to be "Ready" ...
	I0103 19:32:14.619488   33509 request.go:629] Waited for 194.380722ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tp9s2
	I0103 19:32:14.619568   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tp9s2
	I0103 19:32:14.619579   33509 round_trippers.go:469] Request Headers:
	I0103 19:32:14.619594   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:32:14.619607   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:32:14.622251   33509 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:32:14.622277   33509 round_trippers.go:577] Response Headers:
	I0103 19:32:14.622287   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:32:14 GMT
	I0103 19:32:14.622299   33509 round_trippers.go:580]     Audit-Id: d217275d-67f0-4239-96a0-32b5b4dda8f6
	I0103 19:32:14.622310   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:32:14.622322   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:32:14.622331   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:32:14.622339   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:32:14.622460   33509 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tp9s2","generateName":"kube-proxy-","namespace":"kube-system","uid":"728b1db9-b145-4ad3-b366-7fd8306d7a2a","resourceVersion":"757","creationTimestamp":"2024-01-03T19:21:56Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"93e45959-afd7-4869-a648-321076d75f45","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:21:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"93e45959-afd7-4869-a648-321076d75f45\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0103 19:32:14.819179   33509 request.go:629] Waited for 196.269146ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.191:8443/api/v1/nodes/multinode-484895
	I0103 19:32:14.819239   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895
	I0103 19:32:14.819246   33509 round_trippers.go:469] Request Headers:
	I0103 19:32:14.819254   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:32:14.819262   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:32:14.823718   33509 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0103 19:32:14.823740   33509 round_trippers.go:577] Response Headers:
	I0103 19:32:14.823747   33509 round_trippers.go:580]     Audit-Id: bb1e1475-74cc-40c7-a739-81d3a6844d0c
	I0103 19:32:14.823752   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:32:14.823757   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:32:14.823762   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:32:14.823768   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:32:14.823773   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:32:14 GMT
	I0103 19:32:14.824387   33509 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","resourceVersion":"764","creationTimestamp":"2024-01-03T19:21:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_21_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:21:40Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0103 19:32:14.824759   33509 pod_ready.go:92] pod "kube-proxy-tp9s2" in "kube-system" namespace has status "Ready":"True"
	I0103 19:32:14.824783   33509 pod_ready.go:81] duration metric: took 399.725854ms waiting for pod "kube-proxy-tp9s2" in "kube-system" namespace to be "Ready" ...
	I0103 19:32:14.824797   33509 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-484895" in "kube-system" namespace to be "Ready" ...
	I0103 19:32:15.019811   33509 request.go:629] Waited for 194.941824ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-484895
	I0103 19:32:15.019875   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-484895
	I0103 19:32:15.019883   33509 round_trippers.go:469] Request Headers:
	I0103 19:32:15.019891   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:32:15.019899   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:32:15.022596   33509 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:32:15.022618   33509 round_trippers.go:577] Response Headers:
	I0103 19:32:15.022631   33509 round_trippers.go:580]     Audit-Id: 0044457b-3083-4bce-a852-f5b2e3323ec0
	I0103 19:32:15.022640   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:32:15.022651   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:32:15.022663   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:32:15.022674   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:32:15.022688   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:32:15 GMT
	I0103 19:32:15.022841   33509 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-484895","namespace":"kube-system","uid":"f981e6c0-1f4a-44ed-b043-c69ef28b4fa5","resourceVersion":"841","creationTimestamp":"2024-01-03T19:21:44Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"2de4242735fdb53c42fed3daf21e4e5e","kubernetes.io/config.mirror":"2de4242735fdb53c42fed3daf21e4e5e","kubernetes.io/config.seen":"2024-01-03T19:21:43.948372698Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:21:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I0103 19:32:15.219579   33509 request.go:629] Waited for 196.408256ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.191:8443/api/v1/nodes/multinode-484895
	I0103 19:32:15.219680   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895
	I0103 19:32:15.219691   33509 round_trippers.go:469] Request Headers:
	I0103 19:32:15.219702   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:32:15.219717   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:32:15.223024   33509 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0103 19:32:15.223043   33509 round_trippers.go:577] Response Headers:
	I0103 19:32:15.223051   33509 round_trippers.go:580]     Audit-Id: 2594c918-7f6d-4a19-ad31-bf48699967fc
	I0103 19:32:15.223057   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:32:15.223062   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:32:15.223071   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:32:15.223083   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:32:15.223090   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:32:15 GMT
	I0103 19:32:15.223256   33509 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","resourceVersion":"764","creationTimestamp":"2024-01-03T19:21:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_21_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:21:40Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0103 19:32:15.223647   33509 pod_ready.go:92] pod "kube-scheduler-multinode-484895" in "kube-system" namespace has status "Ready":"True"
	I0103 19:32:15.223666   33509 pod_ready.go:81] duration metric: took 398.856395ms waiting for pod "kube-scheduler-multinode-484895" in "kube-system" namespace to be "Ready" ...
	I0103 19:32:15.223679   33509 pod_ready.go:38] duration metric: took 7.5756002s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
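The pod_ready.go entries above record a per-pod readiness poll: GET the pod, check its Ready condition, repeat until it reports True or the timeout expires. A minimal client-go sketch of that pattern, assuming a configured clientset; the helper name waitPodReady is illustrative, not minikube's actual function:

package example

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls until the named pod reports Ready=True or the timeout expires.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				// Stop on lookup errors; treating them as "not ready yet" would
				// also be a reasonable choice.
				return false, err
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}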
	I0103 19:32:15.223700   33509 api_server.go:52] waiting for apiserver process to appear ...
	I0103 19:32:15.223766   33509 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 19:32:15.236258   33509 command_runner.go:130] > 1089
	I0103 19:32:15.236311   33509 api_server.go:72] duration metric: took 8.188392565s to wait for apiserver process to appear ...
	I0103 19:32:15.236323   33509 api_server.go:88] waiting for apiserver healthz status ...
	I0103 19:32:15.236344   33509 api_server.go:253] Checking apiserver healthz at https://192.168.39.191:8443/healthz ...
	I0103 19:32:15.241354   33509 api_server.go:279] https://192.168.39.191:8443/healthz returned 200:
	ok
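The healthz probe logged above issues a raw GET against the apiserver's /healthz path and expects the literal body "ok". A minimal sketch of that check using client-go's discovery REST client, assuming a configured clientset; it mirrors the probe conceptually and is not minikube's exact implementation:

package example

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
)

// apiServerHealthy returns nil when the apiserver's /healthz endpoint answers "ok".
func apiServerHealthy(ctx context.Context, cs kubernetes.Interface) error {
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").Do(ctx).Raw()
	if err != nil {
		return err
	}
	if string(body) != "ok" {
		return fmt.Errorf("unexpected healthz response: %q", string(body))
	}
	return nil
}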
	I0103 19:32:15.241425   33509 round_trippers.go:463] GET https://192.168.39.191:8443/version
	I0103 19:32:15.241433   33509 round_trippers.go:469] Request Headers:
	I0103 19:32:15.241440   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:32:15.241447   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:32:15.242609   33509 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0103 19:32:15.242625   33509 round_trippers.go:577] Response Headers:
	I0103 19:32:15.242645   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:32:15.242655   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:32:15.242662   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:32:15.242667   33509 round_trippers.go:580]     Content-Length: 264
	I0103 19:32:15.242676   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:32:15 GMT
	I0103 19:32:15.242685   33509 round_trippers.go:580]     Audit-Id: a8495cad-4422-41c8-af0e-985261e6a468
	I0103 19:32:15.242690   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:32:15.242743   33509 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0103 19:32:15.242794   33509 api_server.go:141] control plane version: v1.28.4
	I0103 19:32:15.242814   33509 api_server.go:131] duration metric: took 6.484036ms to wait for apiserver health ...
	I0103 19:32:15.242823   33509 system_pods.go:43] waiting for kube-system pods to appear ...
	I0103 19:32:15.419297   33509 request.go:629] Waited for 176.399129ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods
	I0103 19:32:15.419400   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods
	I0103 19:32:15.419408   33509 round_trippers.go:469] Request Headers:
	I0103 19:32:15.419418   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:32:15.419427   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:32:15.424005   33509 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0103 19:32:15.424038   33509 round_trippers.go:577] Response Headers:
	I0103 19:32:15.424052   33509 round_trippers.go:580]     Audit-Id: 6b351601-e5b6-40a3-a0fa-c7055f8c41ad
	I0103 19:32:15.424062   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:32:15.424071   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:32:15.424079   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:32:15.424093   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:32:15.424099   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:32:15 GMT
	I0103 19:32:15.425332   33509 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"857"},"items":[{"metadata":{"name":"coredns-5dd5756b68-wzsqb","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9e8dd1fe-7476-40d5-8cb7-562fcdc5deaa","resourceVersion":"833","creationTimestamp":"2024-01-03T19:21:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e9219a81-ca58-4a90-b963-60ed0c2d0b1c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:21:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e9219a81-ca58-4a90-b963-60ed0c2d0b1c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 81878 chars]
	I0103 19:32:15.428867   33509 system_pods.go:59] 12 kube-system pods found
	I0103 19:32:15.428897   33509 system_pods.go:61] "coredns-5dd5756b68-wzsqb" [9e8dd1fe-7476-40d5-8cb7-562fcdc5deaa] Running
	I0103 19:32:15.428904   33509 system_pods.go:61] "etcd-multinode-484895" [2b5f9dc7-2d61-4968-9b9a-cfc029c9522b] Running
	I0103 19:32:15.428910   33509 system_pods.go:61] "kindnet-gqgk2" [8d4f9028-52ad-44dd-83be-0bb7cc590b7f] Running
	I0103 19:32:15.428919   33509 system_pods.go:61] "kindnet-lfkpk" [69692e6a-42a1-48d7-aec1-d192a3e793ec] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0103 19:32:15.428927   33509 system_pods.go:61] "kindnet-zt7zf" [410b1bf2-5e4a-4c3d-8cbb-4145b96b8e3e] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0103 19:32:15.428932   33509 system_pods.go:61] "kube-apiserver-multinode-484895" [f9f36416-b761-4534-8e09-bc3c94813149] Running
	I0103 19:32:15.428938   33509 system_pods.go:61] "kube-controller-manager-multinode-484895" [a04de258-1f92-4ac7-8f30-18ad9ebb6d40] Running
	I0103 19:32:15.428941   33509 system_pods.go:61] "kube-proxy-k7jnm" [4b0bd9f4-9da5-42c6-83a4-0a3f05f640b3] Running
	I0103 19:32:15.428945   33509 system_pods.go:61] "kube-proxy-strp6" [f16942b4-2697-4fd7-88f7-3699e16bff79] Running
	I0103 19:32:15.428949   33509 system_pods.go:61] "kube-proxy-tp9s2" [728b1db9-b145-4ad3-b366-7fd8306d7a2a] Running
	I0103 19:32:15.428956   33509 system_pods.go:61] "kube-scheduler-multinode-484895" [f981e6c0-1f4a-44ed-b043-c69ef28b4fa5] Running
	I0103 19:32:15.428959   33509 system_pods.go:61] "storage-provisioner" [82edd1c3-f361-4f86-8d59-8b89193d7a31] Running
	I0103 19:32:15.428965   33509 system_pods.go:74] duration metric: took 186.137751ms to wait for pod list to return data ...
	I0103 19:32:15.428985   33509 default_sa.go:34] waiting for default service account to be created ...
	I0103 19:32:15.619412   33509 request.go:629] Waited for 190.342214ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.191:8443/api/v1/namespaces/default/serviceaccounts
	I0103 19:32:15.619520   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/namespaces/default/serviceaccounts
	I0103 19:32:15.619528   33509 round_trippers.go:469] Request Headers:
	I0103 19:32:15.619535   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:32:15.619542   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:32:15.622423   33509 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:32:15.622447   33509 round_trippers.go:577] Response Headers:
	I0103 19:32:15.622458   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:32:15.622465   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:32:15.622472   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:32:15.622480   33509 round_trippers.go:580]     Content-Length: 261
	I0103 19:32:15.622538   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:32:15 GMT
	I0103 19:32:15.622551   33509 round_trippers.go:580]     Audit-Id: 0f244c41-f38b-4ce8-9c2c-605e778bbfce
	I0103 19:32:15.622560   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:32:15.622588   33509 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"857"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"47bf7b55-c706-4355-a436-e9ecf18d06f2","resourceVersion":"306","creationTimestamp":"2024-01-03T19:21:56Z"}}]}
	I0103 19:32:15.622770   33509 default_sa.go:45] found service account: "default"
	I0103 19:32:15.622789   33509 default_sa.go:55] duration metric: took 193.798367ms for default service account to be created ...
	I0103 19:32:15.622800   33509 system_pods.go:116] waiting for k8s-apps to be running ...
	I0103 19:32:15.819223   33509 request.go:629] Waited for 196.36319ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods
	I0103 19:32:15.819301   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods
	I0103 19:32:15.819307   33509 round_trippers.go:469] Request Headers:
	I0103 19:32:15.819318   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:32:15.819325   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:32:15.825025   33509 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0103 19:32:15.825054   33509 round_trippers.go:577] Response Headers:
	I0103 19:32:15.825068   33509 round_trippers.go:580]     Audit-Id: 6b8ccc22-1728-4788-b68b-f88749bd1388
	I0103 19:32:15.825077   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:32:15.825085   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:32:15.825093   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:32:15.825101   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:32:15.825110   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:32:15 GMT
	I0103 19:32:15.827994   33509 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"857"},"items":[{"metadata":{"name":"coredns-5dd5756b68-wzsqb","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9e8dd1fe-7476-40d5-8cb7-562fcdc5deaa","resourceVersion":"833","creationTimestamp":"2024-01-03T19:21:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e9219a81-ca58-4a90-b963-60ed0c2d0b1c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:21:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e9219a81-ca58-4a90-b963-60ed0c2d0b1c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 81878 chars]
	I0103 19:32:15.831498   33509 system_pods.go:86] 12 kube-system pods found
	I0103 19:32:15.831527   33509 system_pods.go:89] "coredns-5dd5756b68-wzsqb" [9e8dd1fe-7476-40d5-8cb7-562fcdc5deaa] Running
	I0103 19:32:15.831535   33509 system_pods.go:89] "etcd-multinode-484895" [2b5f9dc7-2d61-4968-9b9a-cfc029c9522b] Running
	I0103 19:32:15.831542   33509 system_pods.go:89] "kindnet-gqgk2" [8d4f9028-52ad-44dd-83be-0bb7cc590b7f] Running
	I0103 19:32:15.831555   33509 system_pods.go:89] "kindnet-lfkpk" [69692e6a-42a1-48d7-aec1-d192a3e793ec] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0103 19:32:15.831568   33509 system_pods.go:89] "kindnet-zt7zf" [410b1bf2-5e4a-4c3d-8cbb-4145b96b8e3e] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0103 19:32:15.831579   33509 system_pods.go:89] "kube-apiserver-multinode-484895" [f9f36416-b761-4534-8e09-bc3c94813149] Running
	I0103 19:32:15.831591   33509 system_pods.go:89] "kube-controller-manager-multinode-484895" [a04de258-1f92-4ac7-8f30-18ad9ebb6d40] Running
	I0103 19:32:15.831600   33509 system_pods.go:89] "kube-proxy-k7jnm" [4b0bd9f4-9da5-42c6-83a4-0a3f05f640b3] Running
	I0103 19:32:15.831610   33509 system_pods.go:89] "kube-proxy-strp6" [f16942b4-2697-4fd7-88f7-3699e16bff79] Running
	I0103 19:32:15.831619   33509 system_pods.go:89] "kube-proxy-tp9s2" [728b1db9-b145-4ad3-b366-7fd8306d7a2a] Running
	I0103 19:32:15.831626   33509 system_pods.go:89] "kube-scheduler-multinode-484895" [f981e6c0-1f4a-44ed-b043-c69ef28b4fa5] Running
	I0103 19:32:15.831635   33509 system_pods.go:89] "storage-provisioner" [82edd1c3-f361-4f86-8d59-8b89193d7a31] Running
	I0103 19:32:15.831644   33509 system_pods.go:126] duration metric: took 208.838067ms to wait for k8s-apps to be running ...
	I0103 19:32:15.831655   33509 system_svc.go:44] waiting for kubelet service to be running ....
	I0103 19:32:15.831706   33509 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 19:32:15.846929   33509 system_svc.go:56] duration metric: took 15.26418ms WaitForService to wait for kubelet.
	I0103 19:32:15.846956   33509 kubeadm.go:581] duration metric: took 8.799038036s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0103 19:32:15.846980   33509 node_conditions.go:102] verifying NodePressure condition ...
	I0103 19:32:16.019473   33509 request.go:629] Waited for 172.395935ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.191:8443/api/v1/nodes
	I0103 19:32:16.019534   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes
	I0103 19:32:16.019541   33509 round_trippers.go:469] Request Headers:
	I0103 19:32:16.019553   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:32:16.019563   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:32:16.023039   33509 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0103 19:32:16.023066   33509 round_trippers.go:577] Response Headers:
	I0103 19:32:16.023078   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:32:16.023087   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:32:16 GMT
	I0103 19:32:16.023095   33509 round_trippers.go:580]     Audit-Id: d3975d8a-54e5-41c4-acc2-2efb1f3c431a
	I0103 19:32:16.023103   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:32:16.023114   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:32:16.023123   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:32:16.023321   33509 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"857"},"items":[{"metadata":{"name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","resourceVersion":"764","creationTimestamp":"2024-01-03T19:21:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_21_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 16179 chars]
	I0103 19:32:16.023971   33509 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0103 19:32:16.023995   33509 node_conditions.go:123] node cpu capacity is 2
	I0103 19:32:16.024005   33509 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0103 19:32:16.024009   33509 node_conditions.go:123] node cpu capacity is 2
	I0103 19:32:16.024013   33509 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0103 19:32:16.024024   33509 node_conditions.go:123] node cpu capacity is 2
	I0103 19:32:16.024038   33509 node_conditions.go:105] duration metric: took 177.053644ms to run NodePressure ...
	I0103 19:32:16.024048   33509 start.go:228] waiting for startup goroutines ...
	I0103 19:32:16.024054   33509 start.go:233] waiting for cluster config update ...
	I0103 19:32:16.024061   33509 start.go:242] writing updated cluster config ...
	I0103 19:32:16.024466   33509 config.go:182] Loaded profile config "multinode-484895": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 19:32:16.024542   33509 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/multinode-484895/config.json ...
	I0103 19:32:16.027450   33509 out.go:177] * Starting worker node multinode-484895-m02 in cluster multinode-484895
	I0103 19:32:16.028946   33509 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0103 19:32:16.028970   33509 cache.go:56] Caching tarball of preloaded images
	I0103 19:32:16.029072   33509 preload.go:174] Found /home/jenkins/minikube-integration/17885-9609/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0103 19:32:16.029084   33509 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0103 19:32:16.029178   33509 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/multinode-484895/config.json ...
	I0103 19:32:16.029345   33509 start.go:365] acquiring machines lock for multinode-484895-m02: {Name:mk43df5d7e9fef8aa5f3e5c539ca15bff35ae8cf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0103 19:32:16.029382   33509 start.go:369] acquired machines lock for "multinode-484895-m02" in 20.436µs
	I0103 19:32:16.029393   33509 start.go:96] Skipping create...Using existing machine configuration
	I0103 19:32:16.029400   33509 fix.go:54] fixHost starting: m02
	I0103 19:32:16.029660   33509 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 19:32:16.029684   33509 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 19:32:16.044339   33509 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42555
	I0103 19:32:16.044741   33509 main.go:141] libmachine: () Calling .GetVersion
	I0103 19:32:16.045163   33509 main.go:141] libmachine: Using API Version  1
	I0103 19:32:16.045185   33509 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 19:32:16.045485   33509 main.go:141] libmachine: () Calling .GetMachineName
	I0103 19:32:16.045651   33509 main.go:141] libmachine: (multinode-484895-m02) Calling .DriverName
	I0103 19:32:16.045810   33509 main.go:141] libmachine: (multinode-484895-m02) Calling .GetState
	I0103 19:32:16.047393   33509 fix.go:102] recreateIfNeeded on multinode-484895-m02: state=Running err=<nil>
	W0103 19:32:16.047409   33509 fix.go:128] unexpected machine state, will restart: <nil>
	I0103 19:32:16.050622   33509 out.go:177] * Updating the running kvm2 "multinode-484895-m02" VM ...
	I0103 19:32:16.051971   33509 machine.go:88] provisioning docker machine ...
	I0103 19:32:16.051995   33509 main.go:141] libmachine: (multinode-484895-m02) Calling .DriverName
	I0103 19:32:16.052201   33509 main.go:141] libmachine: (multinode-484895-m02) Calling .GetMachineName
	I0103 19:32:16.052356   33509 buildroot.go:166] provisioning hostname "multinode-484895-m02"
	I0103 19:32:16.052371   33509 main.go:141] libmachine: (multinode-484895-m02) Calling .GetMachineName
	I0103 19:32:16.052485   33509 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHHostname
	I0103 19:32:16.054870   33509 main.go:141] libmachine: (multinode-484895-m02) DBG | domain multinode-484895-m02 has defined MAC address 52:54:00:b5:0c:0f in network mk-multinode-484895
	I0103 19:32:16.055274   33509 main.go:141] libmachine: (multinode-484895-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0c:0f", ip: ""} in network mk-multinode-484895: {Iface:virbr1 ExpiryTime:2024-01-03 20:22:20 +0000 UTC Type:0 Mac:52:54:00:b5:0c:0f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-484895-m02 Clientid:01:52:54:00:b5:0c:0f}
	I0103 19:32:16.055307   33509 main.go:141] libmachine: (multinode-484895-m02) DBG | domain multinode-484895-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:b5:0c:0f in network mk-multinode-484895
	I0103 19:32:16.055455   33509 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHPort
	I0103 19:32:16.055661   33509 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHKeyPath
	I0103 19:32:16.055811   33509 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHKeyPath
	I0103 19:32:16.055937   33509 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHUsername
	I0103 19:32:16.056149   33509 main.go:141] libmachine: Using SSH client type: native
	I0103 19:32:16.056523   33509 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0103 19:32:16.056576   33509 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-484895-m02 && echo "multinode-484895-m02" | sudo tee /etc/hostname
	I0103 19:32:16.184403   33509 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-484895-m02
	
	I0103 19:32:16.184431   33509 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHHostname
	I0103 19:32:16.187493   33509 main.go:141] libmachine: (multinode-484895-m02) DBG | domain multinode-484895-m02 has defined MAC address 52:54:00:b5:0c:0f in network mk-multinode-484895
	I0103 19:32:16.187861   33509 main.go:141] libmachine: (multinode-484895-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0c:0f", ip: ""} in network mk-multinode-484895: {Iface:virbr1 ExpiryTime:2024-01-03 20:22:20 +0000 UTC Type:0 Mac:52:54:00:b5:0c:0f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-484895-m02 Clientid:01:52:54:00:b5:0c:0f}
	I0103 19:32:16.187895   33509 main.go:141] libmachine: (multinode-484895-m02) DBG | domain multinode-484895-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:b5:0c:0f in network mk-multinode-484895
	I0103 19:32:16.188119   33509 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHPort
	I0103 19:32:16.188307   33509 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHKeyPath
	I0103 19:32:16.188475   33509 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHKeyPath
	I0103 19:32:16.188630   33509 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHUsername
	I0103 19:32:16.188798   33509 main.go:141] libmachine: Using SSH client type: native
	I0103 19:32:16.189174   33509 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0103 19:32:16.189204   33509 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-484895-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-484895-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-484895-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0103 19:32:16.303584   33509 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0103 19:32:16.303614   33509 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17885-9609/.minikube CaCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17885-9609/.minikube}
	I0103 19:32:16.303637   33509 buildroot.go:174] setting up certificates
	I0103 19:32:16.303647   33509 provision.go:83] configureAuth start
	I0103 19:32:16.303664   33509 main.go:141] libmachine: (multinode-484895-m02) Calling .GetMachineName
	I0103 19:32:16.303959   33509 main.go:141] libmachine: (multinode-484895-m02) Calling .GetIP
	I0103 19:32:16.307321   33509 main.go:141] libmachine: (multinode-484895-m02) DBG | domain multinode-484895-m02 has defined MAC address 52:54:00:b5:0c:0f in network mk-multinode-484895
	I0103 19:32:16.307702   33509 main.go:141] libmachine: (multinode-484895-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0c:0f", ip: ""} in network mk-multinode-484895: {Iface:virbr1 ExpiryTime:2024-01-03 20:22:20 +0000 UTC Type:0 Mac:52:54:00:b5:0c:0f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-484895-m02 Clientid:01:52:54:00:b5:0c:0f}
	I0103 19:32:16.307747   33509 main.go:141] libmachine: (multinode-484895-m02) DBG | domain multinode-484895-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:b5:0c:0f in network mk-multinode-484895
	I0103 19:32:16.307882   33509 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHHostname
	I0103 19:32:16.310363   33509 main.go:141] libmachine: (multinode-484895-m02) DBG | domain multinode-484895-m02 has defined MAC address 52:54:00:b5:0c:0f in network mk-multinode-484895
	I0103 19:32:16.310784   33509 main.go:141] libmachine: (multinode-484895-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0c:0f", ip: ""} in network mk-multinode-484895: {Iface:virbr1 ExpiryTime:2024-01-03 20:22:20 +0000 UTC Type:0 Mac:52:54:00:b5:0c:0f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-484895-m02 Clientid:01:52:54:00:b5:0c:0f}
	I0103 19:32:16.310805   33509 main.go:141] libmachine: (multinode-484895-m02) DBG | domain multinode-484895-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:b5:0c:0f in network mk-multinode-484895
	I0103 19:32:16.310945   33509 provision.go:138] copyHostCerts
	I0103 19:32:16.310976   33509 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem
	I0103 19:32:16.311004   33509 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem, removing ...
	I0103 19:32:16.311014   33509 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem
	I0103 19:32:16.311079   33509 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem (1679 bytes)
	I0103 19:32:16.311157   33509 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem
	I0103 19:32:16.311176   33509 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem, removing ...
	I0103 19:32:16.311181   33509 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem
	I0103 19:32:16.311216   33509 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem (1078 bytes)
	I0103 19:32:16.311275   33509 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem
	I0103 19:32:16.311301   33509 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem, removing ...
	I0103 19:32:16.311309   33509 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem
	I0103 19:32:16.311332   33509 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem (1123 bytes)
	I0103 19:32:16.311401   33509 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem org=jenkins.multinode-484895-m02 san=[192.168.39.86 192.168.39.86 localhost 127.0.0.1 minikube multinode-484895-m02]
	I0103 19:32:16.420811   33509 provision.go:172] copyRemoteCerts
	I0103 19:32:16.420865   33509 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0103 19:32:16.420886   33509 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHHostname
	I0103 19:32:16.423939   33509 main.go:141] libmachine: (multinode-484895-m02) DBG | domain multinode-484895-m02 has defined MAC address 52:54:00:b5:0c:0f in network mk-multinode-484895
	I0103 19:32:16.424456   33509 main.go:141] libmachine: (multinode-484895-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0c:0f", ip: ""} in network mk-multinode-484895: {Iface:virbr1 ExpiryTime:2024-01-03 20:22:20 +0000 UTC Type:0 Mac:52:54:00:b5:0c:0f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-484895-m02 Clientid:01:52:54:00:b5:0c:0f}
	I0103 19:32:16.424495   33509 main.go:141] libmachine: (multinode-484895-m02) DBG | domain multinode-484895-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:b5:0c:0f in network mk-multinode-484895
	I0103 19:32:16.424683   33509 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHPort
	I0103 19:32:16.424897   33509 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHKeyPath
	I0103 19:32:16.425081   33509 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHUsername
	I0103 19:32:16.425233   33509 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/multinode-484895-m02/id_rsa Username:docker}
	I0103 19:32:16.516478   33509 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0103 19:32:16.516568   33509 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0103 19:32:16.539839   33509 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0103 19:32:16.539908   33509 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0103 19:32:16.563038   33509 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0103 19:32:16.563130   33509 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0103 19:32:16.585520   33509 provision.go:86] duration metric: configureAuth took 281.857351ms
	I0103 19:32:16.585543   33509 buildroot.go:189] setting minikube options for container-runtime
	I0103 19:32:16.585761   33509 config.go:182] Loaded profile config "multinode-484895": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 19:32:16.585832   33509 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHHostname
	I0103 19:32:16.588209   33509 main.go:141] libmachine: (multinode-484895-m02) DBG | domain multinode-484895-m02 has defined MAC address 52:54:00:b5:0c:0f in network mk-multinode-484895
	I0103 19:32:16.588516   33509 main.go:141] libmachine: (multinode-484895-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0c:0f", ip: ""} in network mk-multinode-484895: {Iface:virbr1 ExpiryTime:2024-01-03 20:22:20 +0000 UTC Type:0 Mac:52:54:00:b5:0c:0f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-484895-m02 Clientid:01:52:54:00:b5:0c:0f}
	I0103 19:32:16.588550   33509 main.go:141] libmachine: (multinode-484895-m02) DBG | domain multinode-484895-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:b5:0c:0f in network mk-multinode-484895
	I0103 19:32:16.588683   33509 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHPort
	I0103 19:32:16.588860   33509 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHKeyPath
	I0103 19:32:16.589031   33509 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHKeyPath
	I0103 19:32:16.589182   33509 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHUsername
	I0103 19:32:16.589357   33509 main.go:141] libmachine: Using SSH client type: native
	I0103 19:32:16.589692   33509 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0103 19:32:16.589713   33509 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0103 19:33:47.278866   33509 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0103 19:33:47.278899   33509 machine.go:91] provisioned docker machine in 1m31.226909862s
	I0103 19:33:47.278917   33509 start.go:300] post-start starting for "multinode-484895-m02" (driver="kvm2")
	I0103 19:33:47.278931   33509 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0103 19:33:47.278953   33509 main.go:141] libmachine: (multinode-484895-m02) Calling .DriverName
	I0103 19:33:47.279306   33509 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0103 19:33:47.279335   33509 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHHostname
	I0103 19:33:47.282001   33509 main.go:141] libmachine: (multinode-484895-m02) DBG | domain multinode-484895-m02 has defined MAC address 52:54:00:b5:0c:0f in network mk-multinode-484895
	I0103 19:33:47.282434   33509 main.go:141] libmachine: (multinode-484895-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0c:0f", ip: ""} in network mk-multinode-484895: {Iface:virbr1 ExpiryTime:2024-01-03 20:22:20 +0000 UTC Type:0 Mac:52:54:00:b5:0c:0f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-484895-m02 Clientid:01:52:54:00:b5:0c:0f}
	I0103 19:33:47.282462   33509 main.go:141] libmachine: (multinode-484895-m02) DBG | domain multinode-484895-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:b5:0c:0f in network mk-multinode-484895
	I0103 19:33:47.282681   33509 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHPort
	I0103 19:33:47.282880   33509 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHKeyPath
	I0103 19:33:47.283027   33509 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHUsername
	I0103 19:33:47.283151   33509 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/multinode-484895-m02/id_rsa Username:docker}
	I0103 19:33:47.374206   33509 ssh_runner.go:195] Run: cat /etc/os-release
	I0103 19:33:47.378753   33509 command_runner.go:130] > NAME=Buildroot
	I0103 19:33:47.378774   33509 command_runner.go:130] > VERSION=2021.02.12-1-gae27a7b-dirty
	I0103 19:33:47.378778   33509 command_runner.go:130] > ID=buildroot
	I0103 19:33:47.378783   33509 command_runner.go:130] > VERSION_ID=2021.02.12
	I0103 19:33:47.378788   33509 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0103 19:33:47.378819   33509 info.go:137] Remote host: Buildroot 2021.02.12
	I0103 19:33:47.378828   33509 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-9609/.minikube/addons for local assets ...
	I0103 19:33:47.378886   33509 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-9609/.minikube/files for local assets ...
	I0103 19:33:47.378950   33509 filesync.go:149] local asset: /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem -> 167952.pem in /etc/ssl/certs
	I0103 19:33:47.378959   33509 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem -> /etc/ssl/certs/167952.pem
	I0103 19:33:47.379042   33509 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0103 19:33:47.388532   33509 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem --> /etc/ssl/certs/167952.pem (1708 bytes)
	I0103 19:33:47.411165   33509 start.go:303] post-start completed in 132.23075ms
	I0103 19:33:47.411202   33509 fix.go:56] fixHost completed within 1m31.381799711s
	I0103 19:33:47.411227   33509 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHHostname
	I0103 19:33:47.414130   33509 main.go:141] libmachine: (multinode-484895-m02) DBG | domain multinode-484895-m02 has defined MAC address 52:54:00:b5:0c:0f in network mk-multinode-484895
	I0103 19:33:47.414488   33509 main.go:141] libmachine: (multinode-484895-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0c:0f", ip: ""} in network mk-multinode-484895: {Iface:virbr1 ExpiryTime:2024-01-03 20:22:20 +0000 UTC Type:0 Mac:52:54:00:b5:0c:0f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-484895-m02 Clientid:01:52:54:00:b5:0c:0f}
	I0103 19:33:47.414529   33509 main.go:141] libmachine: (multinode-484895-m02) DBG | domain multinode-484895-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:b5:0c:0f in network mk-multinode-484895
	I0103 19:33:47.414737   33509 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHPort
	I0103 19:33:47.414952   33509 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHKeyPath
	I0103 19:33:47.415110   33509 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHKeyPath
	I0103 19:33:47.415286   33509 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHUsername
	I0103 19:33:47.415465   33509 main.go:141] libmachine: Using SSH client type: native
	I0103 19:33:47.415775   33509 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0103 19:33:47.415786   33509 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0103 19:33:47.531164   33509 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704310427.520613435
	
	I0103 19:33:47.531193   33509 fix.go:206] guest clock: 1704310427.520613435
	I0103 19:33:47.531202   33509 fix.go:219] Guest: 2024-01-03 19:33:47.520613435 +0000 UTC Remote: 2024-01-03 19:33:47.411207314 +0000 UTC m=+447.980893677 (delta=109.406121ms)
	I0103 19:33:47.531216   33509 fix.go:190] guest clock delta is within tolerance: 109.406121ms
	I0103 19:33:47.531222   33509 start.go:83] releasing machines lock for "multinode-484895-m02", held for 1m31.501832337s
	I0103 19:33:47.531248   33509 main.go:141] libmachine: (multinode-484895-m02) Calling .DriverName
	I0103 19:33:47.531562   33509 main.go:141] libmachine: (multinode-484895-m02) Calling .GetIP
	I0103 19:33:47.534679   33509 main.go:141] libmachine: (multinode-484895-m02) DBG | domain multinode-484895-m02 has defined MAC address 52:54:00:b5:0c:0f in network mk-multinode-484895
	I0103 19:33:47.535078   33509 main.go:141] libmachine: (multinode-484895-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0c:0f", ip: ""} in network mk-multinode-484895: {Iface:virbr1 ExpiryTime:2024-01-03 20:22:20 +0000 UTC Type:0 Mac:52:54:00:b5:0c:0f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-484895-m02 Clientid:01:52:54:00:b5:0c:0f}
	I0103 19:33:47.535109   33509 main.go:141] libmachine: (multinode-484895-m02) DBG | domain multinode-484895-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:b5:0c:0f in network mk-multinode-484895
	I0103 19:33:47.537184   33509 out.go:177] * Found network options:
	I0103 19:33:47.538913   33509 out.go:177]   - NO_PROXY=192.168.39.191
	W0103 19:33:47.540511   33509 proxy.go:119] fail to check proxy env: Error ip not in block
	I0103 19:33:47.540554   33509 main.go:141] libmachine: (multinode-484895-m02) Calling .DriverName
	I0103 19:33:47.541231   33509 main.go:141] libmachine: (multinode-484895-m02) Calling .DriverName
	I0103 19:33:47.541470   33509 main.go:141] libmachine: (multinode-484895-m02) Calling .DriverName
	I0103 19:33:47.541564   33509 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0103 19:33:47.541623   33509 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHHostname
	W0103 19:33:47.541731   33509 proxy.go:119] fail to check proxy env: Error ip not in block
	I0103 19:33:47.541804   33509 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0103 19:33:47.541828   33509 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHHostname
	I0103 19:33:47.544635   33509 main.go:141] libmachine: (multinode-484895-m02) DBG | domain multinode-484895-m02 has defined MAC address 52:54:00:b5:0c:0f in network mk-multinode-484895
	I0103 19:33:47.545116   33509 main.go:141] libmachine: (multinode-484895-m02) DBG | domain multinode-484895-m02 has defined MAC address 52:54:00:b5:0c:0f in network mk-multinode-484895
	I0103 19:33:47.545157   33509 main.go:141] libmachine: (multinode-484895-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0c:0f", ip: ""} in network mk-multinode-484895: {Iface:virbr1 ExpiryTime:2024-01-03 20:22:20 +0000 UTC Type:0 Mac:52:54:00:b5:0c:0f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-484895-m02 Clientid:01:52:54:00:b5:0c:0f}
	I0103 19:33:47.545157   33509 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHPort
	I0103 19:33:47.545184   33509 main.go:141] libmachine: (multinode-484895-m02) DBG | domain multinode-484895-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:b5:0c:0f in network mk-multinode-484895
	I0103 19:33:47.545287   33509 main.go:141] libmachine: (multinode-484895-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0c:0f", ip: ""} in network mk-multinode-484895: {Iface:virbr1 ExpiryTime:2024-01-03 20:22:20 +0000 UTC Type:0 Mac:52:54:00:b5:0c:0f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-484895-m02 Clientid:01:52:54:00:b5:0c:0f}
	I0103 19:33:47.545319   33509 main.go:141] libmachine: (multinode-484895-m02) DBG | domain multinode-484895-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:b5:0c:0f in network mk-multinode-484895
	I0103 19:33:47.545334   33509 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHKeyPath
	I0103 19:33:47.545482   33509 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHPort
	I0103 19:33:47.545594   33509 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHUsername
	I0103 19:33:47.545661   33509 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHKeyPath
	I0103 19:33:47.545763   33509 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/multinode-484895-m02/id_rsa Username:docker}
	I0103 19:33:47.545800   33509 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHUsername
	I0103 19:33:47.545908   33509 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/multinode-484895-m02/id_rsa Username:docker}
	I0103 19:33:47.665800   33509 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0103 19:33:47.780970   33509 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0103 19:33:47.786388   33509 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0103 19:33:47.786431   33509 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0103 19:33:47.786488   33509 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0103 19:33:47.794465   33509 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0103 19:33:47.794483   33509 start.go:475] detecting cgroup driver to use...
	I0103 19:33:47.794563   33509 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0103 19:33:47.808430   33509 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0103 19:33:47.821133   33509 docker.go:203] disabling cri-docker service (if available) ...
	I0103 19:33:47.821182   33509 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0103 19:33:47.833299   33509 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0103 19:33:47.845311   33509 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0103 19:33:47.984321   33509 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0103 19:33:48.116289   33509 docker.go:219] disabling docker service ...
	I0103 19:33:48.116359   33509 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0103 19:33:48.129639   33509 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0103 19:33:48.141967   33509 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0103 19:33:48.267218   33509 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0103 19:33:48.393227   33509 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0103 19:33:48.405391   33509 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0103 19:33:48.423819   33509 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0103 19:33:48.423866   33509 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0103 19:33:48.423924   33509 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 19:33:48.434472   33509 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0103 19:33:48.434555   33509 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 19:33:48.443622   33509 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 19:33:48.452889   33509 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 19:33:48.462166   33509 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0103 19:33:48.472017   33509 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0103 19:33:48.480405   33509 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0103 19:33:48.480475   33509 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0103 19:33:48.488690   33509 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0103 19:33:48.609751   33509 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0103 19:33:54.034891   33509 ssh_runner.go:235] Completed: sudo systemctl restart crio: (5.425098968s)
	I0103 19:33:54.034918   33509 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0103 19:33:54.034969   33509 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0103 19:33:54.040122   33509 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0103 19:33:54.040140   33509 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0103 19:33:54.040147   33509 command_runner.go:130] > Device: 16h/22d	Inode: 1238        Links: 1
	I0103 19:33:54.040154   33509 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0103 19:33:54.040159   33509 command_runner.go:130] > Access: 2024-01-03 19:33:53.955406935 +0000
	I0103 19:33:54.040165   33509 command_runner.go:130] > Modify: 2024-01-03 19:33:53.955406935 +0000
	I0103 19:33:54.040171   33509 command_runner.go:130] > Change: 2024-01-03 19:33:53.955406935 +0000
	I0103 19:33:54.040178   33509 command_runner.go:130] >  Birth: -
	I0103 19:33:54.040481   33509 start.go:543] Will wait 60s for crictl version
	I0103 19:33:54.040531   33509 ssh_runner.go:195] Run: which crictl
	I0103 19:33:54.044224   33509 command_runner.go:130] > /usr/bin/crictl
	I0103 19:33:54.044271   33509 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0103 19:33:54.088759   33509 command_runner.go:130] > Version:  0.1.0
	I0103 19:33:54.088851   33509 command_runner.go:130] > RuntimeName:  cri-o
	I0103 19:33:54.088895   33509 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0103 19:33:54.088978   33509 command_runner.go:130] > RuntimeApiVersion:  v1
	I0103 19:33:54.090350   33509 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0103 19:33:54.090416   33509 ssh_runner.go:195] Run: crio --version
	I0103 19:33:54.133817   33509 command_runner.go:130] > crio version 1.24.1
	I0103 19:33:54.133843   33509 command_runner.go:130] > Version:          1.24.1
	I0103 19:33:54.133853   33509 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0103 19:33:54.133860   33509 command_runner.go:130] > GitTreeState:     dirty
	I0103 19:33:54.133870   33509 command_runner.go:130] > BuildDate:        2023-12-16T11:46:37Z
	I0103 19:33:54.133881   33509 command_runner.go:130] > GoVersion:        go1.19.9
	I0103 19:33:54.133885   33509 command_runner.go:130] > Compiler:         gc
	I0103 19:33:54.133890   33509 command_runner.go:130] > Platform:         linux/amd64
	I0103 19:33:54.133895   33509 command_runner.go:130] > Linkmode:         dynamic
	I0103 19:33:54.133902   33509 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0103 19:33:54.133906   33509 command_runner.go:130] > SeccompEnabled:   true
	I0103 19:33:54.133911   33509 command_runner.go:130] > AppArmorEnabled:  false
	I0103 19:33:54.134003   33509 ssh_runner.go:195] Run: crio --version
	I0103 19:33:54.179874   33509 command_runner.go:130] > crio version 1.24.1
	I0103 19:33:54.179902   33509 command_runner.go:130] > Version:          1.24.1
	I0103 19:33:54.179914   33509 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0103 19:33:54.179920   33509 command_runner.go:130] > GitTreeState:     dirty
	I0103 19:33:54.179930   33509 command_runner.go:130] > BuildDate:        2023-12-16T11:46:37Z
	I0103 19:33:54.179937   33509 command_runner.go:130] > GoVersion:        go1.19.9
	I0103 19:33:54.179943   33509 command_runner.go:130] > Compiler:         gc
	I0103 19:33:54.179952   33509 command_runner.go:130] > Platform:         linux/amd64
	I0103 19:33:54.179959   33509 command_runner.go:130] > Linkmode:         dynamic
	I0103 19:33:54.179969   33509 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0103 19:33:54.179974   33509 command_runner.go:130] > SeccompEnabled:   true
	I0103 19:33:54.179994   33509 command_runner.go:130] > AppArmorEnabled:  false
	I0103 19:33:54.182923   33509 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0103 19:33:54.184461   33509 out.go:177]   - env NO_PROXY=192.168.39.191
	I0103 19:33:54.186038   33509 main.go:141] libmachine: (multinode-484895-m02) Calling .GetIP
	I0103 19:33:54.189059   33509 main.go:141] libmachine: (multinode-484895-m02) DBG | domain multinode-484895-m02 has defined MAC address 52:54:00:b5:0c:0f in network mk-multinode-484895
	I0103 19:33:54.189485   33509 main.go:141] libmachine: (multinode-484895-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0c:0f", ip: ""} in network mk-multinode-484895: {Iface:virbr1 ExpiryTime:2024-01-03 20:22:20 +0000 UTC Type:0 Mac:52:54:00:b5:0c:0f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-484895-m02 Clientid:01:52:54:00:b5:0c:0f}
	I0103 19:33:54.189527   33509 main.go:141] libmachine: (multinode-484895-m02) DBG | domain multinode-484895-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:b5:0c:0f in network mk-multinode-484895
	I0103 19:33:54.189756   33509 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0103 19:33:54.194304   33509 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0103 19:33:54.194355   33509 certs.go:56] Setting up /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/multinode-484895 for IP: 192.168.39.86
	I0103 19:33:54.194374   33509 certs.go:190] acquiring lock for shared ca certs: {Name:mkcbd6a6a2f3ee7625ecf4a1f72bb7f9689bd33d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 19:33:54.194549   33509 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17885-9609/.minikube/ca.key
	I0103 19:33:54.194606   33509 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.key
	I0103 19:33:54.194624   33509 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0103 19:33:54.194644   33509 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0103 19:33:54.194662   33509 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0103 19:33:54.194679   33509 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0103 19:33:54.194740   33509 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795.pem (1338 bytes)
	W0103 19:33:54.194768   33509 certs.go:433] ignoring /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795_empty.pem, impossibly tiny 0 bytes
	I0103 19:33:54.194779   33509 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem (1675 bytes)
	I0103 19:33:54.194799   33509 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem (1078 bytes)
	I0103 19:33:54.194822   33509 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem (1123 bytes)
	I0103 19:33:54.194845   33509 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem (1679 bytes)
	I0103 19:33:54.194882   33509 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem (1708 bytes)
	I0103 19:33:54.194909   33509 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0103 19:33:54.194922   33509 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795.pem -> /usr/share/ca-certificates/16795.pem
	I0103 19:33:54.194934   33509 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem -> /usr/share/ca-certificates/167952.pem
	I0103 19:33:54.195247   33509 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0103 19:33:54.218015   33509 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0103 19:33:54.239951   33509 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0103 19:33:54.261252   33509 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0103 19:33:54.283067   33509 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0103 19:33:54.304774   33509 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795.pem --> /usr/share/ca-certificates/16795.pem (1338 bytes)
	I0103 19:33:54.326023   33509 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem --> /usr/share/ca-certificates/167952.pem (1708 bytes)
	I0103 19:33:54.346779   33509 ssh_runner.go:195] Run: openssl version
	I0103 19:33:54.352147   33509 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0103 19:33:54.352221   33509 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0103 19:33:54.362525   33509 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0103 19:33:54.366762   33509 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan  3 18:58 /usr/share/ca-certificates/minikubeCA.pem
	I0103 19:33:54.366939   33509 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  3 18:58 /usr/share/ca-certificates/minikubeCA.pem
	I0103 19:33:54.366979   33509 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0103 19:33:54.372084   33509 command_runner.go:130] > b5213941
	I0103 19:33:54.372126   33509 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0103 19:33:54.380738   33509 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16795.pem && ln -fs /usr/share/ca-certificates/16795.pem /etc/ssl/certs/16795.pem"
	I0103 19:33:54.390932   33509 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16795.pem
	I0103 19:33:54.395194   33509 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan  3 19:07 /usr/share/ca-certificates/16795.pem
	I0103 19:33:54.395353   33509 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  3 19:07 /usr/share/ca-certificates/16795.pem
	I0103 19:33:54.395411   33509 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16795.pem
	I0103 19:33:54.400507   33509 command_runner.go:130] > 51391683
	I0103 19:33:54.400769   33509 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16795.pem /etc/ssl/certs/51391683.0"
	I0103 19:33:54.409903   33509 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167952.pem && ln -fs /usr/share/ca-certificates/167952.pem /etc/ssl/certs/167952.pem"
	I0103 19:33:54.420337   33509 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167952.pem
	I0103 19:33:54.424749   33509 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan  3 19:07 /usr/share/ca-certificates/167952.pem
	I0103 19:33:54.424813   33509 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  3 19:07 /usr/share/ca-certificates/167952.pem
	I0103 19:33:54.424865   33509 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167952.pem
	I0103 19:33:54.430074   33509 command_runner.go:130] > 3ec20f2e
	I0103 19:33:54.430511   33509 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167952.pem /etc/ssl/certs/3ec20f2e.0"
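	The sequence above hashes each CA certificate with "openssl x509 -hash -noout" and symlinks it into /etc/ssl/certs as "<subject-hash>.0", which is the directory layout OpenSSL uses to look up trusted certificates. A minimal Go sketch of the same two steps (illustrative only, not minikube's implementation; the path below is an example):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// trustCert mirrors the logged commands: compute the subject hash of a PEM
// certificate via openssl, then create the /etc/ssl/certs/<hash>.0 symlink
// if it does not already exist.
func trustCert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	if _, err := os.Lstat(link); err == nil {
		return nil // link already present, matching "test -L ... || ln -fs ..."
	}
	return os.Symlink(pemPath, link)
}

func main() {
	if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}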
	I0103 19:33:54.439219   33509 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0103 19:33:54.443346   33509 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0103 19:33:54.443519   33509 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0103 19:33:54.443619   33509 ssh_runner.go:195] Run: crio config
	I0103 19:33:54.504396   33509 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0103 19:33:54.504424   33509 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0103 19:33:54.504433   33509 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0103 19:33:54.504439   33509 command_runner.go:130] > #
	I0103 19:33:54.504450   33509 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0103 19:33:54.504459   33509 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0103 19:33:54.504468   33509 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0103 19:33:54.504484   33509 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0103 19:33:54.504495   33509 command_runner.go:130] > # reload'.
	I0103 19:33:54.504505   33509 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0103 19:33:54.504516   33509 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0103 19:33:54.504526   33509 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0103 19:33:54.504532   33509 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0103 19:33:54.504542   33509 command_runner.go:130] > [crio]
	I0103 19:33:54.504552   33509 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0103 19:33:54.504562   33509 command_runner.go:130] > # container images, in this directory.
	I0103 19:33:54.504574   33509 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0103 19:33:54.504587   33509 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0103 19:33:54.504598   33509 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0103 19:33:54.504609   33509 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0103 19:33:54.504623   33509 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0103 19:33:54.504635   33509 command_runner.go:130] > storage_driver = "overlay"
	I0103 19:33:54.504648   33509 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0103 19:33:54.504670   33509 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0103 19:33:54.504680   33509 command_runner.go:130] > storage_option = [
	I0103 19:33:54.504690   33509 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0103 19:33:54.504699   33509 command_runner.go:130] > ]
	I0103 19:33:54.504709   33509 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0103 19:33:54.504721   33509 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0103 19:33:54.504734   33509 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0103 19:33:54.504747   33509 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0103 19:33:54.504760   33509 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0103 19:33:54.504774   33509 command_runner.go:130] > # always happen on a node reboot
	I0103 19:33:54.504782   33509 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0103 19:33:54.504795   33509 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0103 19:33:54.504808   33509 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0103 19:33:54.504825   33509 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0103 19:33:54.504838   33509 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0103 19:33:54.504853   33509 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0103 19:33:54.504868   33509 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0103 19:33:54.504875   33509 command_runner.go:130] > # internal_wipe = true
	I0103 19:33:54.504888   33509 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0103 19:33:54.504901   33509 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0103 19:33:54.504914   33509 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0103 19:33:54.504926   33509 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0103 19:33:54.504936   33509 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0103 19:33:54.504945   33509 command_runner.go:130] > [crio.api]
	I0103 19:33:54.504954   33509 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0103 19:33:54.504966   33509 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0103 19:33:54.504983   33509 command_runner.go:130] > # IP address on which the stream server will listen.
	I0103 19:33:54.504998   33509 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0103 19:33:54.505013   33509 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0103 19:33:54.505025   33509 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0103 19:33:54.505033   33509 command_runner.go:130] > # stream_port = "0"
	I0103 19:33:54.505043   33509 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0103 19:33:54.505054   33509 command_runner.go:130] > # stream_enable_tls = false
	I0103 19:33:54.505068   33509 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0103 19:33:54.505078   33509 command_runner.go:130] > # stream_idle_timeout = ""
	I0103 19:33:54.505091   33509 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0103 19:33:54.505105   33509 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0103 19:33:54.505115   33509 command_runner.go:130] > # minutes.
	I0103 19:33:54.505125   33509 command_runner.go:130] > # stream_tls_cert = ""
	I0103 19:33:54.505138   33509 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0103 19:33:54.505148   33509 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0103 19:33:54.505158   33509 command_runner.go:130] > # stream_tls_key = ""
	I0103 19:33:54.505168   33509 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0103 19:33:54.505182   33509 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0103 19:33:54.505191   33509 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0103 19:33:54.505202   33509 command_runner.go:130] > # stream_tls_ca = ""
	I0103 19:33:54.505214   33509 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0103 19:33:54.505224   33509 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0103 19:33:54.505240   33509 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0103 19:33:54.505251   33509 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0103 19:33:54.505269   33509 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0103 19:33:54.505285   33509 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0103 19:33:54.505292   33509 command_runner.go:130] > [crio.runtime]
	I0103 19:33:54.505302   33509 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0103 19:33:54.505312   33509 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0103 19:33:54.505319   33509 command_runner.go:130] > # "nofile=1024:2048"
	I0103 19:33:54.505329   33509 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0103 19:33:54.505336   33509 command_runner.go:130] > # default_ulimits = [
	I0103 19:33:54.505342   33509 command_runner.go:130] > # ]
	I0103 19:33:54.505355   33509 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0103 19:33:54.505362   33509 command_runner.go:130] > # no_pivot = false
	I0103 19:33:54.505376   33509 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0103 19:33:54.505387   33509 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0103 19:33:54.505398   33509 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0103 19:33:54.505409   33509 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0103 19:33:54.505420   33509 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0103 19:33:54.505432   33509 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0103 19:33:54.505442   33509 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0103 19:33:54.505448   33509 command_runner.go:130] > # Cgroup setting for conmon
	I0103 19:33:54.505457   33509 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0103 19:33:54.505461   33509 command_runner.go:130] > conmon_cgroup = "pod"
	I0103 19:33:54.505467   33509 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0103 19:33:54.505472   33509 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0103 19:33:54.505480   33509 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0103 19:33:54.505487   33509 command_runner.go:130] > conmon_env = [
	I0103 19:33:54.505497   33509 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0103 19:33:54.505505   33509 command_runner.go:130] > ]
	I0103 19:33:54.505515   33509 command_runner.go:130] > # Additional environment variables to set for all the
	I0103 19:33:54.505527   33509 command_runner.go:130] > # containers. These are overridden if set in the
	I0103 19:33:54.505537   33509 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0103 19:33:54.505547   33509 command_runner.go:130] > # default_env = [
	I0103 19:33:54.505553   33509 command_runner.go:130] > # ]
	I0103 19:33:54.505566   33509 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0103 19:33:54.505575   33509 command_runner.go:130] > # selinux = false
	I0103 19:33:54.505587   33509 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0103 19:33:54.505602   33509 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0103 19:33:54.505614   33509 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0103 19:33:54.505625   33509 command_runner.go:130] > # seccomp_profile = ""
	I0103 19:33:54.505636   33509 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0103 19:33:54.505648   33509 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0103 19:33:54.505667   33509 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0103 19:33:54.505678   33509 command_runner.go:130] > # which might increase security.
	I0103 19:33:54.505690   33509 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0103 19:33:54.505704   33509 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0103 19:33:54.505718   33509 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0103 19:33:54.505731   33509 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0103 19:33:54.505744   33509 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0103 19:33:54.505753   33509 command_runner.go:130] > # This option supports live configuration reload.
	I0103 19:33:54.505764   33509 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0103 19:33:54.505776   33509 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0103 19:33:54.505785   33509 command_runner.go:130] > # the cgroup blockio controller.
	I0103 19:33:54.505796   33509 command_runner.go:130] > # blockio_config_file = ""
	I0103 19:33:54.505809   33509 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0103 19:33:54.505819   33509 command_runner.go:130] > # irqbalance daemon.
	I0103 19:33:54.505829   33509 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0103 19:33:54.505843   33509 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0103 19:33:54.505856   33509 command_runner.go:130] > # This option supports live configuration reload.
	I0103 19:33:54.505866   33509 command_runner.go:130] > # rdt_config_file = ""
	I0103 19:33:54.505880   33509 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0103 19:33:54.505891   33509 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0103 19:33:54.505905   33509 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0103 19:33:54.505915   33509 command_runner.go:130] > # separate_pull_cgroup = ""
	I0103 19:33:54.505926   33509 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0103 19:33:54.505939   33509 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0103 19:33:54.505948   33509 command_runner.go:130] > # will be added.
	I0103 19:33:54.505953   33509 command_runner.go:130] > # default_capabilities = [
	I0103 19:33:54.505957   33509 command_runner.go:130] > # 	"CHOWN",
	I0103 19:33:54.505966   33509 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0103 19:33:54.505973   33509 command_runner.go:130] > # 	"FSETID",
	I0103 19:33:54.505982   33509 command_runner.go:130] > # 	"FOWNER",
	I0103 19:33:54.505988   33509 command_runner.go:130] > # 	"SETGID",
	I0103 19:33:54.505998   33509 command_runner.go:130] > # 	"SETUID",
	I0103 19:33:54.506005   33509 command_runner.go:130] > # 	"SETPCAP",
	I0103 19:33:54.506016   33509 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0103 19:33:54.506024   33509 command_runner.go:130] > # 	"KILL",
	I0103 19:33:54.506033   33509 command_runner.go:130] > # ]
	I0103 19:33:54.506043   33509 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0103 19:33:54.506056   33509 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0103 19:33:54.506066   33509 command_runner.go:130] > # default_sysctls = [
	I0103 19:33:54.506070   33509 command_runner.go:130] > # ]
	I0103 19:33:54.506082   33509 command_runner.go:130] > # List of devices on the host that a
	I0103 19:33:54.506096   33509 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0103 19:33:54.506107   33509 command_runner.go:130] > # allowed_devices = [
	I0103 19:33:54.506118   33509 command_runner.go:130] > # 	"/dev/fuse",
	I0103 19:33:54.506124   33509 command_runner.go:130] > # ]
	I0103 19:33:54.506136   33509 command_runner.go:130] > # List of additional devices, specified as
	I0103 19:33:54.506150   33509 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0103 19:33:54.506162   33509 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0103 19:33:54.506184   33509 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0103 19:33:54.506194   33509 command_runner.go:130] > # additional_devices = [
	I0103 19:33:54.506200   33509 command_runner.go:130] > # ]
	I0103 19:33:54.506210   33509 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0103 19:33:54.506220   33509 command_runner.go:130] > # cdi_spec_dirs = [
	I0103 19:33:54.506226   33509 command_runner.go:130] > # 	"/etc/cdi",
	I0103 19:33:54.506236   33509 command_runner.go:130] > # 	"/var/run/cdi",
	I0103 19:33:54.506242   33509 command_runner.go:130] > # ]
	I0103 19:33:54.506256   33509 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0103 19:33:54.506268   33509 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0103 19:33:54.506278   33509 command_runner.go:130] > # Defaults to false.
	I0103 19:33:54.506287   33509 command_runner.go:130] > # device_ownership_from_security_context = false
	I0103 19:33:54.506300   33509 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0103 19:33:54.506314   33509 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0103 19:33:54.506324   33509 command_runner.go:130] > # hooks_dir = [
	I0103 19:33:54.506332   33509 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0103 19:33:54.506342   33509 command_runner.go:130] > # ]
	I0103 19:33:54.506352   33509 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0103 19:33:54.506365   33509 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0103 19:33:54.506378   33509 command_runner.go:130] > # its default mounts from the following two files:
	I0103 19:33:54.506386   33509 command_runner.go:130] > #
	I0103 19:33:54.506398   33509 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0103 19:33:54.506411   33509 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0103 19:33:54.506423   33509 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0103 19:33:54.506432   33509 command_runner.go:130] > #
	I0103 19:33:54.506442   33509 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0103 19:33:54.506453   33509 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0103 19:33:54.506464   33509 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0103 19:33:54.506476   33509 command_runner.go:130] > #      only add mounts it finds in this file.
	I0103 19:33:54.506485   33509 command_runner.go:130] > #
	I0103 19:33:54.506492   33509 command_runner.go:130] > # default_mounts_file = ""
	I0103 19:33:54.506504   33509 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0103 19:33:54.506515   33509 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0103 19:33:54.506536   33509 command_runner.go:130] > pids_limit = 1024
	I0103 19:33:54.506547   33509 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0103 19:33:54.506559   33509 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0103 19:33:54.506574   33509 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0103 19:33:54.506588   33509 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0103 19:33:54.506594   33509 command_runner.go:130] > # log_size_max = -1
	I0103 19:33:54.506606   33509 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0103 19:33:54.506618   33509 command_runner.go:130] > # log_to_journald = false
	I0103 19:33:54.506630   33509 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0103 19:33:54.506643   33509 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0103 19:33:54.506657   33509 command_runner.go:130] > # Path to directory for container attach sockets.
	I0103 19:33:54.506673   33509 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0103 19:33:54.506685   33509 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0103 19:33:54.506696   33509 command_runner.go:130] > # bind_mount_prefix = ""
	I0103 19:33:54.506709   33509 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0103 19:33:54.506716   33509 command_runner.go:130] > # read_only = false
	I0103 19:33:54.506730   33509 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0103 19:33:54.506743   33509 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0103 19:33:54.506753   33509 command_runner.go:130] > # live configuration reload.
	I0103 19:33:54.506760   33509 command_runner.go:130] > # log_level = "info"
	I0103 19:33:54.506769   33509 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0103 19:33:54.506776   33509 command_runner.go:130] > # This option supports live configuration reload.
	I0103 19:33:54.506786   33509 command_runner.go:130] > # log_filter = ""
	I0103 19:33:54.506799   33509 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0103 19:33:54.506813   33509 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0103 19:33:54.506820   33509 command_runner.go:130] > # separated by comma.
	I0103 19:33:54.506830   33509 command_runner.go:130] > # uid_mappings = ""
	I0103 19:33:54.506842   33509 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0103 19:33:54.506854   33509 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0103 19:33:54.506865   33509 command_runner.go:130] > # separated by comma.
	I0103 19:33:54.506877   33509 command_runner.go:130] > # gid_mappings = ""
	I0103 19:33:54.506891   33509 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0103 19:33:54.506904   33509 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0103 19:33:54.506917   33509 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0103 19:33:54.506924   33509 command_runner.go:130] > # minimum_mappable_uid = -1
	I0103 19:33:54.506934   33509 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0103 19:33:54.506943   33509 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0103 19:33:54.506957   33509 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0103 19:33:54.506968   33509 command_runner.go:130] > # minimum_mappable_gid = -1
	I0103 19:33:54.506980   33509 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0103 19:33:54.506993   33509 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0103 19:33:54.507005   33509 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0103 19:33:54.507013   33509 command_runner.go:130] > # ctr_stop_timeout = 30
	I0103 19:33:54.507022   33509 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0103 19:33:54.507030   33509 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0103 19:33:54.507042   33509 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0103 19:33:54.507054   33509 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0103 19:33:54.507062   33509 command_runner.go:130] > drop_infra_ctr = false
	I0103 19:33:54.507075   33509 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0103 19:33:54.507087   33509 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0103 19:33:54.507101   33509 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0103 19:33:54.507107   33509 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0103 19:33:54.507115   33509 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0103 19:33:54.507128   33509 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0103 19:33:54.507139   33509 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0103 19:33:54.507153   33509 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0103 19:33:54.507165   33509 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0103 19:33:54.507178   33509 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0103 19:33:54.507190   33509 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0103 19:33:54.507196   33509 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0103 19:33:54.507206   33509 command_runner.go:130] > # default_runtime = "runc"
	I0103 19:33:54.507215   33509 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0103 19:33:54.507230   33509 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0103 19:33:54.507247   33509 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0103 19:33:54.507258   33509 command_runner.go:130] > # creation as a file is not desired either.
	I0103 19:33:54.507274   33509 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0103 19:33:54.507282   33509 command_runner.go:130] > # the hostname is being managed dynamically.
	I0103 19:33:54.507288   33509 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0103 19:33:54.507294   33509 command_runner.go:130] > # ]
	I0103 19:33:54.507308   33509 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0103 19:33:54.507321   33509 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0103 19:33:54.507334   33509 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0103 19:33:54.507347   33509 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0103 19:33:54.507356   33509 command_runner.go:130] > #
	I0103 19:33:54.507362   33509 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0103 19:33:54.507370   33509 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0103 19:33:54.507377   33509 command_runner.go:130] > #  runtime_type = "oci"
	I0103 19:33:54.507389   33509 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0103 19:33:54.507401   33509 command_runner.go:130] > #  privileged_without_host_devices = false
	I0103 19:33:54.507411   33509 command_runner.go:130] > #  allowed_annotations = []
	I0103 19:33:54.507421   33509 command_runner.go:130] > # Where:
	I0103 19:33:54.507429   33509 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0103 19:33:54.507442   33509 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0103 19:33:54.507452   33509 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0103 19:33:54.507459   33509 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0103 19:33:54.507471   33509 command_runner.go:130] > #   in $PATH.
	I0103 19:33:54.507483   33509 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0103 19:33:54.507490   33509 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0103 19:33:54.507501   33509 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0103 19:33:54.507508   33509 command_runner.go:130] > #   state.
	I0103 19:33:54.507518   33509 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0103 19:33:54.507533   33509 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0103 19:33:54.507539   33509 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0103 19:33:54.507551   33509 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0103 19:33:54.507565   33509 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0103 19:33:54.507579   33509 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0103 19:33:54.507591   33509 command_runner.go:130] > #   The currently recognized values are:
	I0103 19:33:54.507604   33509 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0103 19:33:54.507619   33509 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0103 19:33:54.507628   33509 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0103 19:33:54.507641   33509 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0103 19:33:54.507658   33509 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0103 19:33:54.507674   33509 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0103 19:33:54.507688   33509 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0103 19:33:54.507701   33509 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0103 19:33:54.507710   33509 command_runner.go:130] > #   should be moved to the container's cgroup
	I0103 19:33:54.507718   33509 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0103 19:33:54.507729   33509 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0103 19:33:54.507740   33509 command_runner.go:130] > runtime_type = "oci"
	I0103 19:33:54.507751   33509 command_runner.go:130] > runtime_root = "/run/runc"
	I0103 19:33:54.507761   33509 command_runner.go:130] > runtime_config_path = ""
	I0103 19:33:54.507771   33509 command_runner.go:130] > monitor_path = ""
	I0103 19:33:54.507778   33509 command_runner.go:130] > monitor_cgroup = ""
	I0103 19:33:54.507788   33509 command_runner.go:130] > monitor_exec_cgroup = ""
	I0103 19:33:54.507798   33509 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0103 19:33:54.507807   33509 command_runner.go:130] > # running containers
	I0103 19:33:54.507818   33509 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0103 19:33:54.507832   33509 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0103 19:33:54.507877   33509 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0103 19:33:54.507887   33509 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0103 19:33:54.507895   33509 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0103 19:33:54.507904   33509 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0103 19:33:54.507916   33509 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0103 19:33:54.507927   33509 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0103 19:33:54.507938   33509 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0103 19:33:54.507949   33509 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0103 19:33:54.507962   33509 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0103 19:33:54.507970   33509 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0103 19:33:54.507983   33509 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0103 19:33:54.507998   33509 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0103 19:33:54.508014   33509 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0103 19:33:54.508027   33509 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0103 19:33:54.508044   33509 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0103 19:33:54.508055   33509 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0103 19:33:54.508067   33509 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0103 19:33:54.508083   33509 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0103 19:33:54.508092   33509 command_runner.go:130] > # Example:
	I0103 19:33:54.508103   33509 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0103 19:33:54.508115   33509 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0103 19:33:54.508126   33509 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0103 19:33:54.508137   33509 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0103 19:33:54.508143   33509 command_runner.go:130] > # cpuset = 0
	I0103 19:33:54.508149   33509 command_runner.go:130] > # cpushares = "0-1"
	I0103 19:33:54.508158   33509 command_runner.go:130] > # Where:
	I0103 19:33:54.508170   33509 command_runner.go:130] > # The workload name is workload-type.
	I0103 19:33:54.508184   33509 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0103 19:33:54.508196   33509 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0103 19:33:54.508209   33509 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0103 19:33:54.508224   33509 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0103 19:33:54.508233   33509 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0103 19:33:54.508240   33509 command_runner.go:130] > # 
	I0103 19:33:54.508254   33509 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0103 19:33:54.508263   33509 command_runner.go:130] > #
	I0103 19:33:54.508277   33509 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0103 19:33:54.508291   33509 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0103 19:33:54.508304   33509 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0103 19:33:54.508316   33509 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0103 19:33:54.508321   33509 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0103 19:33:54.508327   33509 command_runner.go:130] > [crio.image]
	I0103 19:33:54.508337   33509 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0103 19:33:54.508346   33509 command_runner.go:130] > # default_transport = "docker://"
	I0103 19:33:54.508356   33509 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0103 19:33:54.508367   33509 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0103 19:33:54.508379   33509 command_runner.go:130] > # global_auth_file = ""
	I0103 19:33:54.508388   33509 command_runner.go:130] > # The image used to instantiate infra containers.
	I0103 19:33:54.508400   33509 command_runner.go:130] > # This option supports live configuration reload.
	I0103 19:33:54.508410   33509 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0103 19:33:54.508424   33509 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0103 19:33:54.508437   33509 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0103 19:33:54.508451   33509 command_runner.go:130] > # This option supports live configuration reload.
	I0103 19:33:54.508462   33509 command_runner.go:130] > # pause_image_auth_file = ""
	I0103 19:33:54.508472   33509 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0103 19:33:54.508481   33509 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0103 19:33:54.508491   33509 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0103 19:33:54.508503   33509 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0103 19:33:54.508510   33509 command_runner.go:130] > # pause_command = "/pause"
	I0103 19:33:54.508524   33509 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0103 19:33:54.508537   33509 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0103 19:33:54.508549   33509 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0103 19:33:54.508563   33509 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0103 19:33:54.508575   33509 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0103 19:33:54.508586   33509 command_runner.go:130] > # signature_policy = ""
	I0103 19:33:54.508600   33509 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0103 19:33:54.508610   33509 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0103 19:33:54.508620   33509 command_runner.go:130] > # changing them here.
	I0103 19:33:54.508631   33509 command_runner.go:130] > # insecure_registries = [
	I0103 19:33:54.508641   33509 command_runner.go:130] > # ]
	I0103 19:33:54.508655   33509 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0103 19:33:54.508672   33509 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0103 19:33:54.508683   33509 command_runner.go:130] > # image_volumes = "mkdir"
	I0103 19:33:54.508695   33509 command_runner.go:130] > # Temporary directory to use for storing big files
	I0103 19:33:54.508703   33509 command_runner.go:130] > # big_files_temporary_dir = ""
	I0103 19:33:54.508717   33509 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0103 19:33:54.508726   33509 command_runner.go:130] > # CNI plugins.
	I0103 19:33:54.508733   33509 command_runner.go:130] > [crio.network]
	I0103 19:33:54.508746   33509 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0103 19:33:54.508758   33509 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0103 19:33:54.508766   33509 command_runner.go:130] > # cni_default_network = ""
	I0103 19:33:54.508778   33509 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0103 19:33:54.508789   33509 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0103 19:33:54.508797   33509 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0103 19:33:54.508806   33509 command_runner.go:130] > # plugin_dirs = [
	I0103 19:33:54.508813   33509 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0103 19:33:54.508822   33509 command_runner.go:130] > # ]
	I0103 19:33:54.508837   33509 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0103 19:33:54.508846   33509 command_runner.go:130] > [crio.metrics]
	I0103 19:33:54.508853   33509 command_runner.go:130] > # Globally enable or disable metrics support.
	I0103 19:33:54.508863   33509 command_runner.go:130] > enable_metrics = true
	I0103 19:33:54.508870   33509 command_runner.go:130] > # Specify enabled metrics collectors.
	I0103 19:33:54.508880   33509 command_runner.go:130] > # Per default all metrics are enabled.
	I0103 19:33:54.508892   33509 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0103 19:33:54.508905   33509 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0103 19:33:54.508917   33509 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0103 19:33:54.508927   33509 command_runner.go:130] > # metrics_collectors = [
	I0103 19:33:54.508936   33509 command_runner.go:130] > # 	"operations",
	I0103 19:33:54.508947   33509 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0103 19:33:54.508958   33509 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0103 19:33:54.508968   33509 command_runner.go:130] > # 	"operations_errors",
	I0103 19:33:54.508978   33509 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0103 19:33:54.508985   33509 command_runner.go:130] > # 	"image_pulls_by_name",
	I0103 19:33:54.508990   33509 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0103 19:33:54.508996   33509 command_runner.go:130] > # 	"image_pulls_failures",
	I0103 19:33:54.509002   33509 command_runner.go:130] > # 	"image_pulls_successes",
	I0103 19:33:54.509009   33509 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0103 19:33:54.509013   33509 command_runner.go:130] > # 	"image_layer_reuse",
	I0103 19:33:54.509020   33509 command_runner.go:130] > # 	"containers_oom_total",
	I0103 19:33:54.509024   33509 command_runner.go:130] > # 	"containers_oom",
	I0103 19:33:54.509031   33509 command_runner.go:130] > # 	"processes_defunct",
	I0103 19:33:54.509035   33509 command_runner.go:130] > # 	"operations_total",
	I0103 19:33:54.509041   33509 command_runner.go:130] > # 	"operations_latency_seconds",
	I0103 19:33:54.509046   33509 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0103 19:33:54.509053   33509 command_runner.go:130] > # 	"operations_errors_total",
	I0103 19:33:54.509058   33509 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0103 19:33:54.509065   33509 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0103 19:33:54.509070   33509 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0103 19:33:54.509077   33509 command_runner.go:130] > # 	"image_pulls_success_total",
	I0103 19:33:54.509082   33509 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0103 19:33:54.509088   33509 command_runner.go:130] > # 	"containers_oom_count_total",
	I0103 19:33:54.509092   33509 command_runner.go:130] > # ]
	I0103 19:33:54.509099   33509 command_runner.go:130] > # The port on which the metrics server will listen.
	I0103 19:33:54.509104   33509 command_runner.go:130] > # metrics_port = 9090
	I0103 19:33:54.509111   33509 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0103 19:33:54.509118   33509 command_runner.go:130] > # metrics_socket = ""
	I0103 19:33:54.509123   33509 command_runner.go:130] > # The certificate for the secure metrics server.
	I0103 19:33:54.509131   33509 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0103 19:33:54.509139   33509 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0103 19:33:54.509146   33509 command_runner.go:130] > # certificate on any modification event.
	I0103 19:33:54.509150   33509 command_runner.go:130] > # metrics_cert = ""
	I0103 19:33:54.509155   33509 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0103 19:33:54.509162   33509 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0103 19:33:54.509166   33509 command_runner.go:130] > # metrics_key = ""
	I0103 19:33:54.509174   33509 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0103 19:33:54.509181   33509 command_runner.go:130] > [crio.tracing]
	I0103 19:33:54.509186   33509 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0103 19:33:54.509192   33509 command_runner.go:130] > # enable_tracing = false
	I0103 19:33:54.509198   33509 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0103 19:33:54.509205   33509 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0103 19:33:54.509213   33509 command_runner.go:130] > # Number of samples to collect per million spans.
	I0103 19:33:54.509220   33509 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0103 19:33:54.509226   33509 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0103 19:33:54.509232   33509 command_runner.go:130] > [crio.stats]
	I0103 19:33:54.509238   33509 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0103 19:33:54.509245   33509 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0103 19:33:54.509249   33509 command_runner.go:130] > # stats_collection_period = 0
	I0103 19:33:54.509282   33509 command_runner.go:130] ! time="2024-01-03 19:33:54.488655086Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0103 19:33:54.509300   33509 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
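	The dump above is the runtime's effective configuration as printed by "crio config"; the settings that matter downstream (cgroup_manager = "cgroupfs", pause_image = "registry.k8s.io/pause:3.9") reappear in the kubeadm options generated further down. As a hedged sketch (not minikube's code, and the config path is an assumption), those keys can be read from such a TOML file with the BurntSushi/toml package:

package main

import (
	"fmt"
	"log"

	"github.com/BurntSushi/toml"
)

// crioConfig maps only the two keys of interest; the field layout mirrors the
// [crio.runtime] and [crio.image] tables shown in the log above.
type crioConfig struct {
	Crio struct {
		Runtime struct {
			CgroupManager string `toml:"cgroup_manager"`
		} `toml:"runtime"`
		Image struct {
			PauseImage string `toml:"pause_image"`
		} `toml:"image"`
	} `toml:"crio"`
}

func main() {
	var cfg crioConfig
	// /etc/crio/crio.conf is an illustrative path; "crio config" prints the
	// merged result of all configuration sources.
	if _, err := toml.DecodeFile("/etc/crio/crio.conf", &cfg); err != nil {
		log.Fatal(err)
	}
	fmt.Println("cgroup manager:", cfg.Crio.Runtime.CgroupManager) // expected: cgroupfs
	fmt.Println("pause image:", cfg.Crio.Image.PauseImage)         // expected: registry.k8s.io/pause:3.9
}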
	I0103 19:33:54.509365   33509 cni.go:84] Creating CNI manager for ""
	I0103 19:33:54.509398   33509 cni.go:136] 3 nodes found, recommending kindnet
	I0103 19:33:54.509407   33509 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0103 19:33:54.509424   33509 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.86 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-484895 NodeName:multinode-484895-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.191"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.86 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0103 19:33:54.509522   33509 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.86
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-484895-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.86
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.191"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
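The block above is the kubeadm configuration minikube renders for the joining worker. A minimal sketch of comparing it against what the control plane actually stores, assuming a working kubeconfig for the profile (context name assumed to match the profile name):

	kubectl --context multinode-484895 -n kube-system get cm kubeadm-config -o yaml   # live ClusterConfiguration to diff against the rendered one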
	
	I0103 19:33:54.509568   33509 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-484895-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.86
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-484895 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0103 19:33:54.509617   33509 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0103 19:33:54.519446   33509 command_runner.go:130] > kubeadm
	I0103 19:33:54.519470   33509 command_runner.go:130] > kubectl
	I0103 19:33:54.519475   33509 command_runner.go:130] > kubelet
	I0103 19:33:54.519498   33509 binaries.go:44] Found k8s binaries, skipping transfer
	I0103 19:33:54.519554   33509 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0103 19:33:54.528802   33509 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0103 19:33:54.544514   33509 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
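The kubelet drop-in (10-kubeadm.conf) and unit file have now been copied onto the worker. A quick verification sketch, assuming SSH access to the node (for example via minikube ssh with the -n node flag):

	systemctl cat kubelet                                       # unit plus the 10-kubeadm.conf drop-in
	cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf   # the ExecStart line shown above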
	I0103 19:33:54.559505   33509 ssh_runner.go:195] Run: grep 192.168.39.191	control-plane.minikube.internal$ /etc/hosts
	I0103 19:33:54.563116   33509 command_runner.go:130] > 192.168.39.191	control-plane.minikube.internal
	I0103 19:33:54.563271   33509 host.go:66] Checking if "multinode-484895" exists ...
	I0103 19:33:54.563520   33509 config.go:182] Loaded profile config "multinode-484895": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 19:33:54.563608   33509 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 19:33:54.563636   33509 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 19:33:54.577974   33509 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40163
	I0103 19:33:54.578373   33509 main.go:141] libmachine: () Calling .GetVersion
	I0103 19:33:54.578829   33509 main.go:141] libmachine: Using API Version  1
	I0103 19:33:54.578852   33509 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 19:33:54.579157   33509 main.go:141] libmachine: () Calling .GetMachineName
	I0103 19:33:54.579318   33509 main.go:141] libmachine: (multinode-484895) Calling .DriverName
	I0103 19:33:54.579454   33509 start.go:304] JoinCluster: &{Name:multinode-484895 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.4 ClusterName:multinode-484895 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.191 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.86 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.156 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false
ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disa
bleOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 19:33:54.579576   33509 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0103 19:33:54.579591   33509 main.go:141] libmachine: (multinode-484895) Calling .GetSSHHostname
	I0103 19:33:54.582254   33509 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:33:54.582705   33509 main.go:141] libmachine: (multinode-484895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:f0:8c", ip: ""} in network mk-multinode-484895: {Iface:virbr1 ExpiryTime:2024-01-03 20:31:29 +0000 UTC Type:0 Mac:52:54:00:28:f0:8c Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:multinode-484895 Clientid:01:52:54:00:28:f0:8c}
	I0103 19:33:54.582736   33509 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined IP address 192.168.39.191 and MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:33:54.582912   33509 main.go:141] libmachine: (multinode-484895) Calling .GetSSHPort
	I0103 19:33:54.583086   33509 main.go:141] libmachine: (multinode-484895) Calling .GetSSHKeyPath
	I0103 19:33:54.583251   33509 main.go:141] libmachine: (multinode-484895) Calling .GetSSHUsername
	I0103 19:33:54.583418   33509 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/multinode-484895/id_rsa Username:docker}
	I0103 19:33:54.763423   33509 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 3mpgss.u600me6wnk7bfson --discovery-token-ca-cert-hash sha256:abd7748e33dd825416f0452914584982da7041f4caa98027889459d3fee91b12 
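The join command above is the output of the `kubeadm token create --print-join-command --ttl=0` call run over SSH a few lines earlier. A sketch of producing and inspecting such tokens by hand on the control plane:

	sudo kubeadm token create --print-join-command --ttl=0   # prints 'kubeadm join <endpoint> --token ... --discovery-token-ca-cert-hash ...'
	sudo kubeadm token list                                   # existing bootstrap tokens and their TTLs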
	I0103 19:33:54.763472   33509 start.go:317] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:192.168.39.86 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0103 19:33:54.763499   33509 host.go:66] Checking if "multinode-484895" exists ...
	I0103 19:33:54.763949   33509 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 19:33:54.763988   33509 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 19:33:54.778438   33509 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45867
	I0103 19:33:54.778908   33509 main.go:141] libmachine: () Calling .GetVersion
	I0103 19:33:54.779305   33509 main.go:141] libmachine: Using API Version  1
	I0103 19:33:54.779329   33509 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 19:33:54.779691   33509 main.go:141] libmachine: () Calling .GetMachineName
	I0103 19:33:54.779877   33509 main.go:141] libmachine: (multinode-484895) Calling .DriverName
	I0103 19:33:54.780049   33509 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-484895-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0103 19:33:54.780068   33509 main.go:141] libmachine: (multinode-484895) Calling .GetSSHHostname
	I0103 19:33:54.782953   33509 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:33:54.783401   33509 main.go:141] libmachine: (multinode-484895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:f0:8c", ip: ""} in network mk-multinode-484895: {Iface:virbr1 ExpiryTime:2024-01-03 20:31:29 +0000 UTC Type:0 Mac:52:54:00:28:f0:8c Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:multinode-484895 Clientid:01:52:54:00:28:f0:8c}
	I0103 19:33:54.783429   33509 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined IP address 192.168.39.191 and MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:33:54.783569   33509 main.go:141] libmachine: (multinode-484895) Calling .GetSSHPort
	I0103 19:33:54.783732   33509 main.go:141] libmachine: (multinode-484895) Calling .GetSSHKeyPath
	I0103 19:33:54.783878   33509 main.go:141] libmachine: (multinode-484895) Calling .GetSSHUsername
	I0103 19:33:54.783992   33509 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/multinode-484895/id_rsa Username:docker}
	I0103 19:33:54.973711   33509 command_runner.go:130] > node/multinode-484895-m02 cordoned
	I0103 19:33:57.022579   33509 command_runner.go:130] > pod "busybox-5bc68d56bd-lmcnh" has DeletionTimestamp older than 1 seconds, skipping
	I0103 19:33:57.022603   33509 command_runner.go:130] > node/multinode-484895-m02 drained
	I0103 19:33:57.024289   33509 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I0103 19:33:57.024314   33509 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-lfkpk, kube-system/kube-proxy-k7jnm
	I0103 19:33:57.024340   33509 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-484895-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (2.244267327s)
	I0103 19:33:57.024357   33509 node.go:108] successfully drained node "m02"
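The drain invocation still passes the deprecated --delete-local-data flag alongside its replacement --delete-emptydir-data, which is what produces the warning above. A sketch of the equivalent manual operation (and its undo), assuming a kubeconfig for the profile:

	kubectl drain multinode-484895-m02 --ignore-daemonsets --delete-emptydir-data --force --grace-period=1
	kubectl uncordon multinode-484895-m02   # reverse the cordon if the node is to be kept schedulable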
	I0103 19:33:57.024795   33509 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17885-9609/kubeconfig
	I0103 19:33:57.025037   33509 kapi.go:59] client config for multinode-484895: &rest.Config{Host:"https://192.168.39.191:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17885-9609/.minikube/profiles/multinode-484895/client.crt", KeyFile:"/home/jenkins/minikube-integration/17885-9609/.minikube/profiles/multinode-484895/client.key", CAFile:"/home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c20060), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0103 19:33:57.025381   33509 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0103 19:33:57.025450   33509 round_trippers.go:463] DELETE https://192.168.39.191:8443/api/v1/nodes/multinode-484895-m02
	I0103 19:33:57.025462   33509 round_trippers.go:469] Request Headers:
	I0103 19:33:57.025473   33509 round_trippers.go:473]     Content-Type: application/json
	I0103 19:33:57.025482   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:33:57.025492   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:33:57.038020   33509 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0103 19:33:57.038049   33509 round_trippers.go:577] Response Headers:
	I0103 19:33:57.038059   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:33:57 GMT
	I0103 19:33:57.038068   33509 round_trippers.go:580]     Audit-Id: aa629b77-3af2-4a10-becb-53a9afa0c2a4
	I0103 19:33:57.038075   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:33:57.038083   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:33:57.038090   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:33:57.038106   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:33:57.038114   33509 round_trippers.go:580]     Content-Length: 171
	I0103 19:33:57.038138   33509 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-484895-m02","kind":"nodes","uid":"7da57402-60a6-432d-91c4-768d87ae2e5f"}}
	I0103 19:33:57.038171   33509 node.go:124] successfully deleted node "m02"
	I0103 19:33:57.038183   33509 start.go:321] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:192.168.39.86 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
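minikube removes the stale Node object with a raw DELETE against the API (the 200 response above). The kubectl equivalent, as a sketch:

	kubectl delete node multinode-484895-m02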
	I0103 19:33:57.038207   33509 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.86 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0103 19:33:57.038231   33509 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 3mpgss.u600me6wnk7bfson --discovery-token-ca-cert-hash sha256:abd7748e33dd825416f0452914584982da7041f4caa98027889459d3fee91b12 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-484895-m02"
	I0103 19:33:57.096481   33509 command_runner.go:130] ! W0103 19:33:57.085623    2638 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0103 19:33:57.096507   33509 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0103 19:33:57.250269   33509 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0103 19:33:57.250297   33509 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0103 19:33:58.007971   33509 command_runner.go:130] > [preflight] Running pre-flight checks
	I0103 19:33:58.008000   33509 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0103 19:33:58.008009   33509 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0103 19:33:58.008017   33509 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0103 19:33:58.008024   33509 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0103 19:33:58.008030   33509 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0103 19:33:58.008040   33509 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0103 19:33:58.008049   33509 command_runner.go:130] > This node has joined the cluster:
	I0103 19:33:58.008059   33509 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0103 19:33:58.008072   33509 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0103 19:33:58.008084   33509 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0103 19:33:58.008109   33509 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
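After the join output and the kubelet daemon-reload/enable/start above, the worker should be registered with the API server. A short verification sketch from any machine holding the profile's kubeconfig:

	kubectl get nodes -o wide   # multinode-484895-m02 should now be listed
	kubectl get csr             # certificate signing requests created during the kubelet TLS bootstrap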
	I0103 19:33:58.326783   33509 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=1b6a81cbc05f28310ff11df4170e79e2b8bf477a minikube.k8s.io/name=multinode-484895 minikube.k8s.io/updated_at=2024_01_03T19_33_58_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:33:58.453109   33509 command_runner.go:130] > node/multinode-484895-m02 labeled
	I0103 19:33:58.453139   33509 command_runner.go:130] > node/multinode-484895-m03 labeled
	I0103 19:33:58.453165   33509 start.go:306] JoinCluster complete in 3.873711876s
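The label step applies the minikube metadata to every node matching the selector minikube.k8s.io/primary!=true, which is why both m02 and m03 report as labeled even though only m02 was just joined. A sketch for checking the result:

	kubectl get nodes -L minikube.k8s.io/primary -L minikube.k8s.io/version   # show the labels as extra columns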
	I0103 19:33:58.453178   33509 cni.go:84] Creating CNI manager for ""
	I0103 19:33:58.453185   33509 cni.go:136] 3 nodes found, recommending kindnet
	I0103 19:33:58.453261   33509 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0103 19:33:58.463240   33509 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0103 19:33:58.463269   33509 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0103 19:33:58.463282   33509 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0103 19:33:58.463291   33509 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0103 19:33:58.463311   33509 command_runner.go:130] > Access: 2024-01-03 19:31:29.762982388 +0000
	I0103 19:33:58.463319   33509 command_runner.go:130] > Modify: 2023-12-16 11:53:47.000000000 +0000
	I0103 19:33:58.463327   33509 command_runner.go:130] > Change: 2024-01-03 19:31:27.994982388 +0000
	I0103 19:33:58.463333   33509 command_runner.go:130] >  Birth: -
	I0103 19:33:58.465784   33509 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0103 19:33:58.465809   33509 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0103 19:33:58.482970   33509 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0103 19:33:58.840778   33509 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0103 19:33:58.844639   33509 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0103 19:33:58.847107   33509 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0103 19:33:58.856953   33509 command_runner.go:130] > daemonset.apps/kindnet configured
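With three nodes detected, minikube re-applies the kindnet CNI manifest; the unchanged/configured lines above show the apply was essentially a no-op. A sketch of confirming the DaemonSet has rolled out to all nodes:

	kubectl -n kube-system rollout status daemonset/kindnet
	kubectl -n kube-system get pods -o wide -l app=kindnet   # pod label assumed from the bundled kindnet manifest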
	I0103 19:33:58.860335   33509 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17885-9609/kubeconfig
	I0103 19:33:58.860630   33509 kapi.go:59] client config for multinode-484895: &rest.Config{Host:"https://192.168.39.191:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17885-9609/.minikube/profiles/multinode-484895/client.crt", KeyFile:"/home/jenkins/minikube-integration/17885-9609/.minikube/profiles/multinode-484895/client.key", CAFile:"/home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c20060), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0103 19:33:58.860997   33509 round_trippers.go:463] GET https://192.168.39.191:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0103 19:33:58.861017   33509 round_trippers.go:469] Request Headers:
	I0103 19:33:58.861029   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:33:58.861039   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:33:58.863300   33509 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:33:58.863323   33509 round_trippers.go:577] Response Headers:
	I0103 19:33:58.863333   33509 round_trippers.go:580]     Audit-Id: c0cc797d-48f8-4760-9e93-d6af5152419d
	I0103 19:33:58.863342   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:33:58.863350   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:33:58.863358   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:33:58.863367   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:33:58.863375   33509 round_trippers.go:580]     Content-Length: 291
	I0103 19:33:58.863388   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:33:58 GMT
	I0103 19:33:58.863411   33509 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"e2317390-8a66-46be-8656-5adca86177ea","resourceVersion":"854","creationTimestamp":"2024-01-03T19:21:43Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0103 19:33:58.863523   33509 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-484895" context rescaled to 1 replicas
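minikube reads (and, when needed, updates) the coredns Deployment's Scale subresource to keep it at one replica; here the GET above already reports replicas: 1. The kubectl equivalent, as a sketch:

	kubectl -n kube-system scale deployment coredns --replicas=1
	kubectl -n kube-system get deployment coredns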
	I0103 19:33:58.863568   33509 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.39.86 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0103 19:33:58.866759   33509 out.go:177] * Verifying Kubernetes components...
	I0103 19:33:58.868130   33509 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 19:33:58.881552   33509 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17885-9609/kubeconfig
	I0103 19:33:58.881838   33509 kapi.go:59] client config for multinode-484895: &rest.Config{Host:"https://192.168.39.191:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17885-9609/.minikube/profiles/multinode-484895/client.crt", KeyFile:"/home/jenkins/minikube-integration/17885-9609/.minikube/profiles/multinode-484895/client.key", CAFile:"/home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c20060), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0103 19:33:58.882049   33509 node_ready.go:35] waiting up to 6m0s for node "multinode-484895-m02" to be "Ready" ...
	I0103 19:33:58.882139   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895-m02
	I0103 19:33:58.882148   33509 round_trippers.go:469] Request Headers:
	I0103 19:33:58.882156   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:33:58.882162   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:33:58.884561   33509 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:33:58.884587   33509 round_trippers.go:577] Response Headers:
	I0103 19:33:58.884596   33509 round_trippers.go:580]     Audit-Id: b910acf0-6459-46a2-9f41-a575868bcd3b
	I0103 19:33:58.884607   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:33:58.884615   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:33:58.884623   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:33:58.884632   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:33:58.884640   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:33:58 GMT
	I0103 19:33:58.884910   33509 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895-m02","uid":"26e72b14-f775-4f90-838e-83277742fe57","resourceVersion":"996","creationTimestamp":"2024-01-03T19:33:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_33_58_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:33:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3992 chars]
	I0103 19:33:58.885257   33509 node_ready.go:49] node "multinode-484895-m02" has status "Ready":"True"
	I0103 19:33:58.885276   33509 node_ready.go:38] duration metric: took 3.212397ms waiting for node "multinode-484895-m02" to be "Ready" ...
	I0103 19:33:58.885286   33509 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
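The remainder of the log is the readiness loop: poll the Node object until Ready, then poll each system-critical pod (coredns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler). A rough kubectl equivalent of the same wait, as a sketch:

	kubectl wait --for=condition=Ready node/multinode-484895-m02 --timeout=6m
	kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m
	kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-proxy --timeout=6m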
	I0103 19:33:58.885350   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods
	I0103 19:33:58.885359   33509 round_trippers.go:469] Request Headers:
	I0103 19:33:58.885366   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:33:58.885372   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:33:58.888904   33509 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0103 19:33:58.888926   33509 round_trippers.go:577] Response Headers:
	I0103 19:33:58.888935   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:33:58.888942   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:33:58.888949   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:33:58.888958   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:33:58 GMT
	I0103 19:33:58.888964   33509 round_trippers.go:580]     Audit-Id: 9631082e-722e-4036-81fb-2bc2f28159f9
	I0103 19:33:58.888971   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:33:58.889933   33509 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1003"},"items":[{"metadata":{"name":"coredns-5dd5756b68-wzsqb","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9e8dd1fe-7476-40d5-8cb7-562fcdc5deaa","resourceVersion":"833","creationTimestamp":"2024-01-03T19:21:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e9219a81-ca58-4a90-b963-60ed0c2d0b1c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:21:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e9219a81-ca58-4a90-b963-60ed0c2d0b1c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 82238 chars]
	I0103 19:33:58.892497   33509 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-wzsqb" in "kube-system" namespace to be "Ready" ...
	I0103 19:33:58.892572   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-wzsqb
	I0103 19:33:58.892580   33509 round_trippers.go:469] Request Headers:
	I0103 19:33:58.892587   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:33:58.892593   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:33:58.894625   33509 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:33:58.894646   33509 round_trippers.go:577] Response Headers:
	I0103 19:33:58.894655   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:33:58.894663   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:33:58.894688   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:33:58.894697   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:33:58.894709   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:33:58 GMT
	I0103 19:33:58.894717   33509 round_trippers.go:580]     Audit-Id: 04d73790-1703-4083-8126-1d54bbb76d46
	I0103 19:33:58.894891   33509 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-wzsqb","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9e8dd1fe-7476-40d5-8cb7-562fcdc5deaa","resourceVersion":"833","creationTimestamp":"2024-01-03T19:21:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e9219a81-ca58-4a90-b963-60ed0c2d0b1c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:21:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e9219a81-ca58-4a90-b963-60ed0c2d0b1c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I0103 19:33:58.895418   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895
	I0103 19:33:58.895437   33509 round_trippers.go:469] Request Headers:
	I0103 19:33:58.895449   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:33:58.895459   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:33:58.900947   33509 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0103 19:33:58.900976   33509 round_trippers.go:577] Response Headers:
	I0103 19:33:58.900987   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:33:58 GMT
	I0103 19:33:58.900996   33509 round_trippers.go:580]     Audit-Id: 41e2a8a1-c370-4dd4-980b-8c46c68b7b88
	I0103 19:33:58.901004   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:33:58.901020   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:33:58.901026   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:33:58.901036   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:33:58.901191   33509 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","resourceVersion":"865","creationTimestamp":"2024-01-03T19:21:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_21_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:21:40Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0103 19:33:58.901603   33509 pod_ready.go:92] pod "coredns-5dd5756b68-wzsqb" in "kube-system" namespace has status "Ready":"True"
	I0103 19:33:58.901627   33509 pod_ready.go:81] duration metric: took 9.107918ms waiting for pod "coredns-5dd5756b68-wzsqb" in "kube-system" namespace to be "Ready" ...
	I0103 19:33:58.901639   33509 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-484895" in "kube-system" namespace to be "Ready" ...
	I0103 19:33:58.901706   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-484895
	I0103 19:33:58.901721   33509 round_trippers.go:469] Request Headers:
	I0103 19:33:58.901735   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:33:58.901748   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:33:58.904476   33509 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:33:58.904492   33509 round_trippers.go:577] Response Headers:
	I0103 19:33:58.904500   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:33:58 GMT
	I0103 19:33:58.904508   33509 round_trippers.go:580]     Audit-Id: d5eb7ef9-f5f4-45a1-bc8d-1d89c9363334
	I0103 19:33:58.904516   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:33:58.904524   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:33:58.904539   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:33:58.904551   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:33:58.904815   33509 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-484895","namespace":"kube-system","uid":"2b5f9dc7-2d61-4968-9b9a-cfc029c9522b","resourceVersion":"825","creationTimestamp":"2024-01-03T19:21:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.191:2379","kubernetes.io/config.hash":"9bc39430cce393fdab624e5093adf15c","kubernetes.io/config.mirror":"9bc39430cce393fdab624e5093adf15c","kubernetes.io/config.seen":"2024-01-03T19:21:43.948366778Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:21:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I0103 19:33:58.905249   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895
	I0103 19:33:58.905262   33509 round_trippers.go:469] Request Headers:
	I0103 19:33:58.905269   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:33:58.905302   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:33:58.907444   33509 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:33:58.907462   33509 round_trippers.go:577] Response Headers:
	I0103 19:33:58.907471   33509 round_trippers.go:580]     Audit-Id: 46f313ed-4280-41da-bcc6-ad9664696c6e
	I0103 19:33:58.907480   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:33:58.907489   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:33:58.907501   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:33:58.907510   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:33:58.907522   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:33:58 GMT
	I0103 19:33:58.907865   33509 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","resourceVersion":"865","creationTimestamp":"2024-01-03T19:21:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_21_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:21:40Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0103 19:33:58.908218   33509 pod_ready.go:92] pod "etcd-multinode-484895" in "kube-system" namespace has status "Ready":"True"
	I0103 19:33:58.908237   33509 pod_ready.go:81] duration metric: took 6.586317ms waiting for pod "etcd-multinode-484895" in "kube-system" namespace to be "Ready" ...
	I0103 19:33:58.908257   33509 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-484895" in "kube-system" namespace to be "Ready" ...
	I0103 19:33:58.908324   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-484895
	I0103 19:33:58.908335   33509 round_trippers.go:469] Request Headers:
	I0103 19:33:58.908344   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:33:58.908353   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:33:58.910333   33509 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0103 19:33:58.910389   33509 round_trippers.go:577] Response Headers:
	I0103 19:33:58.910405   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:33:58.910416   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:33:58.910434   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:33:58 GMT
	I0103 19:33:58.910442   33509 round_trippers.go:580]     Audit-Id: 99c670ee-aa9b-438b-9f5a-9b7c73c24f31
	I0103 19:33:58.910449   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:33:58.910457   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:33:58.910560   33509 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-484895","namespace":"kube-system","uid":"f9f36416-b761-4534-8e09-bc3c94813149","resourceVersion":"827","creationTimestamp":"2024-01-03T19:21:44Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.191:8443","kubernetes.io/config.hash":"2adb5a2561f637a585e38e2b73f2b809","kubernetes.io/config.mirror":"2adb5a2561f637a585e38e2b73f2b809","kubernetes.io/config.seen":"2024-01-03T19:21:43.948370781Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:21:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I0103 19:33:58.911074   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895
	I0103 19:33:58.911086   33509 round_trippers.go:469] Request Headers:
	I0103 19:33:58.911097   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:33:58.911107   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:33:58.913002   33509 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0103 19:33:58.913021   33509 round_trippers.go:577] Response Headers:
	I0103 19:33:58.913028   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:33:58 GMT
	I0103 19:33:58.913033   33509 round_trippers.go:580]     Audit-Id: 0761e322-0f02-4461-87d5-2f6aa558f308
	I0103 19:33:58.913038   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:33:58.913043   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:33:58.913049   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:33:58.913057   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:33:58.913204   33509 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","resourceVersion":"865","creationTimestamp":"2024-01-03T19:21:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_21_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:21:40Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0103 19:33:58.913471   33509 pod_ready.go:92] pod "kube-apiserver-multinode-484895" in "kube-system" namespace has status "Ready":"True"
	I0103 19:33:58.913496   33509 pod_ready.go:81] duration metric: took 5.226801ms waiting for pod "kube-apiserver-multinode-484895" in "kube-system" namespace to be "Ready" ...
	I0103 19:33:58.913504   33509 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-484895" in "kube-system" namespace to be "Ready" ...
	I0103 19:33:58.913551   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-484895
	I0103 19:33:58.913554   33509 round_trippers.go:469] Request Headers:
	I0103 19:33:58.913561   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:33:58.913566   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:33:58.915828   33509 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:33:58.915848   33509 round_trippers.go:577] Response Headers:
	I0103 19:33:58.915857   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:33:58.915874   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:33:58.915882   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:33:58.915893   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:33:58 GMT
	I0103 19:33:58.915903   33509 round_trippers.go:580]     Audit-Id: 20064922-c758-426d-ad86-d555a863aecc
	I0103 19:33:58.915913   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:33:58.916053   33509 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-484895","namespace":"kube-system","uid":"a04de258-1f92-4ac7-8f30-18ad9ebb6d40","resourceVersion":"838","creationTimestamp":"2024-01-03T19:21:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"091c426717be69d480bcc59d28e953ce","kubernetes.io/config.mirror":"091c426717be69d480bcc59d28e953ce","kubernetes.io/config.seen":"2024-01-03T19:21:43.948371847Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:21:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I0103 19:33:58.916518   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895
	I0103 19:33:58.916531   33509 round_trippers.go:469] Request Headers:
	I0103 19:33:58.916538   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:33:58.916544   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:33:58.918350   33509 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0103 19:33:58.918361   33509 round_trippers.go:577] Response Headers:
	I0103 19:33:58.918367   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:33:58 GMT
	I0103 19:33:58.918373   33509 round_trippers.go:580]     Audit-Id: 34b54130-dea3-484a-84ab-9837fd20d2bf
	I0103 19:33:58.918381   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:33:58.918397   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:33:58.918405   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:33:58.918412   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:33:58.918595   33509 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","resourceVersion":"865","creationTimestamp":"2024-01-03T19:21:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_21_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:21:40Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0103 19:33:58.918897   33509 pod_ready.go:92] pod "kube-controller-manager-multinode-484895" in "kube-system" namespace has status "Ready":"True"
	I0103 19:33:58.918912   33509 pod_ready.go:81] duration metric: took 5.401921ms waiting for pod "kube-controller-manager-multinode-484895" in "kube-system" namespace to be "Ready" ...
	I0103 19:33:58.918920   33509 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-k7jnm" in "kube-system" namespace to be "Ready" ...
	I0103 19:33:59.082245   33509 request.go:629] Waited for 163.271931ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/kube-proxy-k7jnm
	I0103 19:33:59.082318   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/kube-proxy-k7jnm
	I0103 19:33:59.082323   33509 round_trippers.go:469] Request Headers:
	I0103 19:33:59.082331   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:33:59.082337   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:33:59.085099   33509 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:33:59.085123   33509 round_trippers.go:577] Response Headers:
	I0103 19:33:59.085134   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:33:59 GMT
	I0103 19:33:59.085144   33509 round_trippers.go:580]     Audit-Id: 44065079-e7c3-46ab-ab67-8213e5ee4724
	I0103 19:33:59.085151   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:33:59.085158   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:33:59.085165   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:33:59.085190   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:33:59.085390   33509 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-k7jnm","generateName":"kube-proxy-","namespace":"kube-system","uid":"4b0bd9f4-9da5-42c6-83a4-0a3f05f640b3","resourceVersion":"1000","creationTimestamp":"2024-01-03T19:22:34Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"93e45959-afd7-4869-a648-321076d75f45","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:22:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"93e45959-afd7-4869-a648-321076d75f45\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5883 chars]
	I0103 19:33:59.282232   33509 request.go:629] Waited for 196.294986ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.191:8443/api/v1/nodes/multinode-484895-m02
	I0103 19:33:59.282293   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895-m02
	I0103 19:33:59.282298   33509 round_trippers.go:469] Request Headers:
	I0103 19:33:59.282306   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:33:59.282311   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:33:59.285529   33509 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0103 19:33:59.285555   33509 round_trippers.go:577] Response Headers:
	I0103 19:33:59.285566   33509 round_trippers.go:580]     Audit-Id: 753cd971-c38d-4d2b-93e8-4426c696813f
	I0103 19:33:59.285575   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:33:59.285583   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:33:59.285591   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:33:59.285601   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:33:59.285609   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:33:59 GMT
	I0103 19:33:59.286015   33509 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895-m02","uid":"26e72b14-f775-4f90-838e-83277742fe57","resourceVersion":"996","creationTimestamp":"2024-01-03T19:33:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_33_58_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:33:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3992 chars]
	I0103 19:33:59.482982   33509 request.go:629] Waited for 63.609135ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/kube-proxy-k7jnm
	I0103 19:33:59.483049   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/kube-proxy-k7jnm
	I0103 19:33:59.483055   33509 round_trippers.go:469] Request Headers:
	I0103 19:33:59.483062   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:33:59.483072   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:33:59.485726   33509 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:33:59.485749   33509 round_trippers.go:577] Response Headers:
	I0103 19:33:59.485759   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:33:59.485766   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:33:59.485774   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:33:59.485782   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:33:59.485791   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:33:59 GMT
	I0103 19:33:59.485803   33509 round_trippers.go:580]     Audit-Id: d32cb728-f119-45f3-8b4f-2dd9bc8cbd57
	I0103 19:33:59.486501   33509 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-k7jnm","generateName":"kube-proxy-","namespace":"kube-system","uid":"4b0bd9f4-9da5-42c6-83a4-0a3f05f640b3","resourceVersion":"1000","creationTimestamp":"2024-01-03T19:22:34Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"93e45959-afd7-4869-a648-321076d75f45","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:22:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"93e45959-afd7-4869-a648-321076d75f45\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5883 chars]
	I0103 19:33:59.682241   33509 request.go:629] Waited for 195.344689ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.191:8443/api/v1/nodes/multinode-484895-m02
	I0103 19:33:59.682335   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895-m02
	I0103 19:33:59.682343   33509 round_trippers.go:469] Request Headers:
	I0103 19:33:59.682353   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:33:59.682363   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:33:59.690327   33509 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0103 19:33:59.690378   33509 round_trippers.go:577] Response Headers:
	I0103 19:33:59.690389   33509 round_trippers.go:580]     Audit-Id: 6ee9714a-ca2f-4285-b285-d61c4c053770
	I0103 19:33:59.690398   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:33:59.690406   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:33:59.690414   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:33:59.690424   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:33:59.690433   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:33:59 GMT
	I0103 19:33:59.690580   33509 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895-m02","uid":"26e72b14-f775-4f90-838e-83277742fe57","resourceVersion":"996","creationTimestamp":"2024-01-03T19:33:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_33_58_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:33:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3992 chars]
	I0103 19:33:59.919122   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/kube-proxy-k7jnm
	I0103 19:33:59.919144   33509 round_trippers.go:469] Request Headers:
	I0103 19:33:59.919156   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:33:59.919164   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:33:59.921661   33509 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:33:59.921684   33509 round_trippers.go:577] Response Headers:
	I0103 19:33:59.921694   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:33:59.921702   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:33:59.921707   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:33:59.921712   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:33:59 GMT
	I0103 19:33:59.921717   33509 round_trippers.go:580]     Audit-Id: ee7bb4ae-a717-48fd-bf01-4c56b83131f2
	I0103 19:33:59.921726   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:33:59.921894   33509 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-k7jnm","generateName":"kube-proxy-","namespace":"kube-system","uid":"4b0bd9f4-9da5-42c6-83a4-0a3f05f640b3","resourceVersion":"1014","creationTimestamp":"2024-01-03T19:22:34Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"93e45959-afd7-4869-a648-321076d75f45","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:22:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"93e45959-afd7-4869-a648-321076d75f45\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5727 chars]
	I0103 19:34:00.082737   33509 request.go:629] Waited for 160.351534ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.191:8443/api/v1/nodes/multinode-484895-m02
	I0103 19:34:00.082819   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895-m02
	I0103 19:34:00.082827   33509 round_trippers.go:469] Request Headers:
	I0103 19:34:00.082842   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:34:00.082860   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:34:00.085423   33509 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:34:00.085446   33509 round_trippers.go:577] Response Headers:
	I0103 19:34:00.085455   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:34:00.085464   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:34:00 GMT
	I0103 19:34:00.085471   33509 round_trippers.go:580]     Audit-Id: b5bb3afc-6b36-4887-a087-f0a0ca81cc45
	I0103 19:34:00.085478   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:34:00.085486   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:34:00.085494   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:34:00.085616   33509 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895-m02","uid":"26e72b14-f775-4f90-838e-83277742fe57","resourceVersion":"996","creationTimestamp":"2024-01-03T19:33:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_33_58_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:33:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3992 chars]
	I0103 19:34:00.085989   33509 pod_ready.go:92] pod "kube-proxy-k7jnm" in "kube-system" namespace has status "Ready":"True"
	I0103 19:34:00.086012   33509 pod_ready.go:81] duration metric: took 1.167084361s waiting for pod "kube-proxy-k7jnm" in "kube-system" namespace to be "Ready" ...
	I0103 19:34:00.086032   33509 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-strp6" in "kube-system" namespace to be "Ready" ...
	I0103 19:34:00.282369   33509 request.go:629] Waited for 196.27476ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/kube-proxy-strp6
	I0103 19:34:00.282432   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/kube-proxy-strp6
	I0103 19:34:00.282437   33509 round_trippers.go:469] Request Headers:
	I0103 19:34:00.282445   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:34:00.282451   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:34:00.285090   33509 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:34:00.285114   33509 round_trippers.go:577] Response Headers:
	I0103 19:34:00.285122   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:34:00 GMT
	I0103 19:34:00.285127   33509 round_trippers.go:580]     Audit-Id: b764bbba-b909-47aa-ba1a-bd649e6e8197
	I0103 19:34:00.285136   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:34:00.285141   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:34:00.285146   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:34:00.285151   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:34:00.285306   33509 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-strp6","generateName":"kube-proxy-","namespace":"kube-system","uid":"f16942b4-2697-4fd7-88f7-3699e16bff79","resourceVersion":"677","creationTimestamp":"2024-01-03T19:23:25Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"93e45959-afd7-4869-a648-321076d75f45","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:23:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"93e45959-afd7-4869-a648-321076d75f45\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I0103 19:34:00.483097   33509 request.go:629] Waited for 197.380698ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.191:8443/api/v1/nodes/multinode-484895-m03
	I0103 19:34:00.483183   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895-m03
	I0103 19:34:00.483192   33509 round_trippers.go:469] Request Headers:
	I0103 19:34:00.483203   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:34:00.483214   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:34:00.486152   33509 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:34:00.486180   33509 round_trippers.go:577] Response Headers:
	I0103 19:34:00.486187   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:34:00.486192   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:34:00.486197   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:34:00 GMT
	I0103 19:34:00.486202   33509 round_trippers.go:580]     Audit-Id: 19f35944-4ec2-4a38-b3ba-227399c3704e
	I0103 19:34:00.486208   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:34:00.486213   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:34:00.486700   33509 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895-m03","uid":"a1762911-aa8b-49cb-8632-51fb5a4220e2","resourceVersion":"997","creationTimestamp":"2024-01-03T19:24:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_33_58_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:24:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 3965 chars]
	I0103 19:34:00.486958   33509 pod_ready.go:92] pod "kube-proxy-strp6" in "kube-system" namespace has status "Ready":"True"
	I0103 19:34:00.486971   33509 pod_ready.go:81] duration metric: took 400.932547ms waiting for pod "kube-proxy-strp6" in "kube-system" namespace to be "Ready" ...
	I0103 19:34:00.486980   33509 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tp9s2" in "kube-system" namespace to be "Ready" ...
	I0103 19:34:00.683185   33509 request.go:629] Waited for 196.129193ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tp9s2
	I0103 19:34:00.683243   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tp9s2
	I0103 19:34:00.683248   33509 round_trippers.go:469] Request Headers:
	I0103 19:34:00.683255   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:34:00.683263   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:34:00.686125   33509 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:34:00.686147   33509 round_trippers.go:577] Response Headers:
	I0103 19:34:00.686154   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:34:00.686160   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:34:00 GMT
	I0103 19:34:00.686165   33509 round_trippers.go:580]     Audit-Id: 26a8b653-15ae-4766-8e5a-338a5617f444
	I0103 19:34:00.686170   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:34:00.686178   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:34:00.686186   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:34:00.686388   33509 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tp9s2","generateName":"kube-proxy-","namespace":"kube-system","uid":"728b1db9-b145-4ad3-b366-7fd8306d7a2a","resourceVersion":"757","creationTimestamp":"2024-01-03T19:21:56Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"93e45959-afd7-4869-a648-321076d75f45","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:21:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"93e45959-afd7-4869-a648-321076d75f45\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0103 19:34:00.882188   33509 request.go:629] Waited for 195.331759ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.191:8443/api/v1/nodes/multinode-484895
	I0103 19:34:00.882266   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895
	I0103 19:34:00.882272   33509 round_trippers.go:469] Request Headers:
	I0103 19:34:00.882279   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:34:00.882285   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:34:00.885448   33509 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0103 19:34:00.885471   33509 round_trippers.go:577] Response Headers:
	I0103 19:34:00.885477   33509 round_trippers.go:580]     Audit-Id: 3556ae2d-4a1a-44b0-a1b9-80345d688472
	I0103 19:34:00.885483   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:34:00.885488   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:34:00.885497   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:34:00.885502   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:34:00.885507   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:34:00 GMT
	I0103 19:34:00.885898   33509 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","resourceVersion":"865","creationTimestamp":"2024-01-03T19:21:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_21_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:21:40Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0103 19:34:00.886191   33509 pod_ready.go:92] pod "kube-proxy-tp9s2" in "kube-system" namespace has status "Ready":"True"
	I0103 19:34:00.886205   33509 pod_ready.go:81] duration metric: took 399.219684ms waiting for pod "kube-proxy-tp9s2" in "kube-system" namespace to be "Ready" ...
	I0103 19:34:00.886213   33509 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-484895" in "kube-system" namespace to be "Ready" ...
	I0103 19:34:01.082819   33509 request.go:629] Waited for 196.531467ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-484895
	I0103 19:34:01.082889   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-484895
	I0103 19:34:01.082895   33509 round_trippers.go:469] Request Headers:
	I0103 19:34:01.082905   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:34:01.082920   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:34:01.085832   33509 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:34:01.085859   33509 round_trippers.go:577] Response Headers:
	I0103 19:34:01.085869   33509 round_trippers.go:580]     Audit-Id: f1acbf47-3ee8-47ec-ace8-ee0d1b2afcd2
	I0103 19:34:01.085875   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:34:01.085880   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:34:01.085885   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:34:01.085890   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:34:01.085896   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:34:01 GMT
	I0103 19:34:01.086050   33509 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-484895","namespace":"kube-system","uid":"f981e6c0-1f4a-44ed-b043-c69ef28b4fa5","resourceVersion":"841","creationTimestamp":"2024-01-03T19:21:44Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"2de4242735fdb53c42fed3daf21e4e5e","kubernetes.io/config.mirror":"2de4242735fdb53c42fed3daf21e4e5e","kubernetes.io/config.seen":"2024-01-03T19:21:43.948372698Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:21:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I0103 19:34:01.282762   33509 request.go:629] Waited for 196.376403ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.191:8443/api/v1/nodes/multinode-484895
	I0103 19:34:01.282845   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895
	I0103 19:34:01.282850   33509 round_trippers.go:469] Request Headers:
	I0103 19:34:01.282858   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:34:01.282867   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:34:01.285583   33509 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:34:01.285602   33509 round_trippers.go:577] Response Headers:
	I0103 19:34:01.285609   33509 round_trippers.go:580]     Audit-Id: 5531f22a-bcf1-45b2-9253-0ecff3520b5e
	I0103 19:34:01.285614   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:34:01.285619   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:34:01.285632   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:34:01.285637   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:34:01.285642   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:34:01 GMT
	I0103 19:34:01.285767   33509 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","resourceVersion":"865","creationTimestamp":"2024-01-03T19:21:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_21_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:21:40Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0103 19:34:01.286058   33509 pod_ready.go:92] pod "kube-scheduler-multinode-484895" in "kube-system" namespace has status "Ready":"True"
	I0103 19:34:01.286071   33509 pod_ready.go:81] duration metric: took 399.852685ms waiting for pod "kube-scheduler-multinode-484895" in "kube-system" namespace to be "Ready" ...
	I0103 19:34:01.286080   33509 pod_ready.go:38] duration metric: took 2.400775192s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 19:34:01.286092   33509 system_svc.go:44] waiting for kubelet service to be running ....
	I0103 19:34:01.286134   33509 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 19:34:01.299341   33509 system_svc.go:56] duration metric: took 13.241141ms WaitForService to wait for kubelet.
	I0103 19:34:01.299369   33509 kubeadm.go:581] duration metric: took 2.435756319s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0103 19:34:01.299389   33509 node_conditions.go:102] verifying NodePressure condition ...
	I0103 19:34:01.482856   33509 request.go:629] Waited for 183.384321ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.191:8443/api/v1/nodes
	I0103 19:34:01.482906   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes
	I0103 19:34:01.482910   33509 round_trippers.go:469] Request Headers:
	I0103 19:34:01.482922   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:34:01.482931   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:34:01.485763   33509 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:34:01.485782   33509 round_trippers.go:577] Response Headers:
	I0103 19:34:01.485789   33509 round_trippers.go:580]     Audit-Id: 48d1262a-89a1-4563-aed5-ba878c660e47
	I0103 19:34:01.485794   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:34:01.485799   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:34:01.485804   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:34:01.485814   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:34:01.485819   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:34:01 GMT
	I0103 19:34:01.486204   33509 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1017"},"items":[{"metadata":{"name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","resourceVersion":"865","creationTimestamp":"2024-01-03T19:21:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_21_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedField
s":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time": [truncated 16208 chars]
	I0103 19:34:01.486890   33509 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0103 19:34:01.486912   33509 node_conditions.go:123] node cpu capacity is 2
	I0103 19:34:01.486923   33509 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0103 19:34:01.486930   33509 node_conditions.go:123] node cpu capacity is 2
	I0103 19:34:01.486935   33509 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0103 19:34:01.486945   33509 node_conditions.go:123] node cpu capacity is 2
	I0103 19:34:01.486951   33509 node_conditions.go:105] duration metric: took 187.556662ms to run NodePressure ...
	I0103 19:34:01.486962   33509 start.go:228] waiting for startup goroutines ...
	I0103 19:34:01.486983   33509 start.go:242] writing updated cluster config ...
	I0103 19:34:01.487442   33509 config.go:182] Loaded profile config "multinode-484895": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 19:34:01.487545   33509 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/multinode-484895/config.json ...
	I0103 19:34:01.490475   33509 out.go:177] * Starting worker node multinode-484895-m03 in cluster multinode-484895
	I0103 19:34:01.491816   33509 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0103 19:34:01.491838   33509 cache.go:56] Caching tarball of preloaded images
	I0103 19:34:01.491941   33509 preload.go:174] Found /home/jenkins/minikube-integration/17885-9609/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0103 19:34:01.491955   33509 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0103 19:34:01.492065   33509 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/multinode-484895/config.json ...
	I0103 19:34:01.492266   33509 start.go:365] acquiring machines lock for multinode-484895-m03: {Name:mk43df5d7e9fef8aa5f3e5c539ca15bff35ae8cf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0103 19:34:01.492311   33509 start.go:369] acquired machines lock for "multinode-484895-m03" in 24.542µs
	I0103 19:34:01.492329   33509 start.go:96] Skipping create...Using existing machine configuration
	I0103 19:34:01.492338   33509 fix.go:54] fixHost starting: m03
	I0103 19:34:01.492618   33509 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 19:34:01.492642   33509 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 19:34:01.506651   33509 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33237
	I0103 19:34:01.507059   33509 main.go:141] libmachine: () Calling .GetVersion
	I0103 19:34:01.507503   33509 main.go:141] libmachine: Using API Version  1
	I0103 19:34:01.507525   33509 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 19:34:01.507833   33509 main.go:141] libmachine: () Calling .GetMachineName
	I0103 19:34:01.508039   33509 main.go:141] libmachine: (multinode-484895-m03) Calling .DriverName
	I0103 19:34:01.508182   33509 main.go:141] libmachine: (multinode-484895-m03) Calling .GetState
	I0103 19:34:01.509742   33509 fix.go:102] recreateIfNeeded on multinode-484895-m03: state=Running err=<nil>
	W0103 19:34:01.509763   33509 fix.go:128] unexpected machine state, will restart: <nil>
	I0103 19:34:01.512632   33509 out.go:177] * Updating the running kvm2 "multinode-484895-m03" VM ...
	I0103 19:34:01.514214   33509 machine.go:88] provisioning docker machine ...
	I0103 19:34:01.514238   33509 main.go:141] libmachine: (multinode-484895-m03) Calling .DriverName
	I0103 19:34:01.514488   33509 main.go:141] libmachine: (multinode-484895-m03) Calling .GetMachineName
	I0103 19:34:01.514687   33509 buildroot.go:166] provisioning hostname "multinode-484895-m03"
	I0103 19:34:01.514711   33509 main.go:141] libmachine: (multinode-484895-m03) Calling .GetMachineName
	I0103 19:34:01.514843   33509 main.go:141] libmachine: (multinode-484895-m03) Calling .GetSSHHostname
	I0103 19:34:01.517971   33509 main.go:141] libmachine: (multinode-484895-m03) DBG | domain multinode-484895-m03 has defined MAC address 52:54:00:a2:34:2e in network mk-multinode-484895
	I0103 19:34:01.518435   33509 main.go:141] libmachine: (multinode-484895-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:34:2e", ip: ""} in network mk-multinode-484895: {Iface:virbr1 ExpiryTime:2024-01-03 20:24:00 +0000 UTC Type:0 Mac:52:54:00:a2:34:2e Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:multinode-484895-m03 Clientid:01:52:54:00:a2:34:2e}
	I0103 19:34:01.518456   33509 main.go:141] libmachine: (multinode-484895-m03) DBG | domain multinode-484895-m03 has defined IP address 192.168.39.156 and MAC address 52:54:00:a2:34:2e in network mk-multinode-484895
	I0103 19:34:01.518634   33509 main.go:141] libmachine: (multinode-484895-m03) Calling .GetSSHPort
	I0103 19:34:01.518824   33509 main.go:141] libmachine: (multinode-484895-m03) Calling .GetSSHKeyPath
	I0103 19:34:01.518959   33509 main.go:141] libmachine: (multinode-484895-m03) Calling .GetSSHKeyPath
	I0103 19:34:01.519130   33509 main.go:141] libmachine: (multinode-484895-m03) Calling .GetSSHUsername
	I0103 19:34:01.519300   33509 main.go:141] libmachine: Using SSH client type: native
	I0103 19:34:01.519623   33509 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.39.156 22 <nil> <nil>}
	I0103 19:34:01.519653   33509 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-484895-m03 && echo "multinode-484895-m03" | sudo tee /etc/hostname
	I0103 19:34:01.672251   33509 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-484895-m03
	
	I0103 19:34:01.672283   33509 main.go:141] libmachine: (multinode-484895-m03) Calling .GetSSHHostname
	I0103 19:34:01.675695   33509 main.go:141] libmachine: (multinode-484895-m03) DBG | domain multinode-484895-m03 has defined MAC address 52:54:00:a2:34:2e in network mk-multinode-484895
	I0103 19:34:01.676044   33509 main.go:141] libmachine: (multinode-484895-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:34:2e", ip: ""} in network mk-multinode-484895: {Iface:virbr1 ExpiryTime:2024-01-03 20:24:00 +0000 UTC Type:0 Mac:52:54:00:a2:34:2e Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:multinode-484895-m03 Clientid:01:52:54:00:a2:34:2e}
	I0103 19:34:01.676076   33509 main.go:141] libmachine: (multinode-484895-m03) DBG | domain multinode-484895-m03 has defined IP address 192.168.39.156 and MAC address 52:54:00:a2:34:2e in network mk-multinode-484895
	I0103 19:34:01.676361   33509 main.go:141] libmachine: (multinode-484895-m03) Calling .GetSSHPort
	I0103 19:34:01.676553   33509 main.go:141] libmachine: (multinode-484895-m03) Calling .GetSSHKeyPath
	I0103 19:34:01.676758   33509 main.go:141] libmachine: (multinode-484895-m03) Calling .GetSSHKeyPath
	I0103 19:34:01.676912   33509 main.go:141] libmachine: (multinode-484895-m03) Calling .GetSSHUsername
	I0103 19:34:01.677093   33509 main.go:141] libmachine: Using SSH client type: native
	I0103 19:34:01.677416   33509 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.39.156 22 <nil> <nil>}
	I0103 19:34:01.677433   33509 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-484895-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-484895-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-484895-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0103 19:34:01.815511   33509 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0103 19:34:01.815541   33509 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17885-9609/.minikube CaCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17885-9609/.minikube}
	I0103 19:34:01.815559   33509 buildroot.go:174] setting up certificates
	I0103 19:34:01.815570   33509 provision.go:83] configureAuth start
	I0103 19:34:01.815583   33509 main.go:141] libmachine: (multinode-484895-m03) Calling .GetMachineName
	I0103 19:34:01.815852   33509 main.go:141] libmachine: (multinode-484895-m03) Calling .GetIP
	I0103 19:34:01.818721   33509 main.go:141] libmachine: (multinode-484895-m03) DBG | domain multinode-484895-m03 has defined MAC address 52:54:00:a2:34:2e in network mk-multinode-484895
	I0103 19:34:01.819044   33509 main.go:141] libmachine: (multinode-484895-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:34:2e", ip: ""} in network mk-multinode-484895: {Iface:virbr1 ExpiryTime:2024-01-03 20:24:00 +0000 UTC Type:0 Mac:52:54:00:a2:34:2e Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:multinode-484895-m03 Clientid:01:52:54:00:a2:34:2e}
	I0103 19:34:01.819074   33509 main.go:141] libmachine: (multinode-484895-m03) DBG | domain multinode-484895-m03 has defined IP address 192.168.39.156 and MAC address 52:54:00:a2:34:2e in network mk-multinode-484895
	I0103 19:34:01.819194   33509 main.go:141] libmachine: (multinode-484895-m03) Calling .GetSSHHostname
	I0103 19:34:01.821191   33509 main.go:141] libmachine: (multinode-484895-m03) DBG | domain multinode-484895-m03 has defined MAC address 52:54:00:a2:34:2e in network mk-multinode-484895
	I0103 19:34:01.821545   33509 main.go:141] libmachine: (multinode-484895-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:34:2e", ip: ""} in network mk-multinode-484895: {Iface:virbr1 ExpiryTime:2024-01-03 20:24:00 +0000 UTC Type:0 Mac:52:54:00:a2:34:2e Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:multinode-484895-m03 Clientid:01:52:54:00:a2:34:2e}
	I0103 19:34:01.821573   33509 main.go:141] libmachine: (multinode-484895-m03) DBG | domain multinode-484895-m03 has defined IP address 192.168.39.156 and MAC address 52:54:00:a2:34:2e in network mk-multinode-484895
	I0103 19:34:01.821706   33509 provision.go:138] copyHostCerts
	I0103 19:34:01.821736   33509 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem
	I0103 19:34:01.821762   33509 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem, removing ...
	I0103 19:34:01.821771   33509 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem
	I0103 19:34:01.821835   33509 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem (1078 bytes)
	I0103 19:34:01.821901   33509 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem
	I0103 19:34:01.821918   33509 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem, removing ...
	I0103 19:34:01.821924   33509 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem
	I0103 19:34:01.821946   33509 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem (1123 bytes)
	I0103 19:34:01.821995   33509 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem
	I0103 19:34:01.822010   33509 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem, removing ...
	I0103 19:34:01.822016   33509 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem
	I0103 19:34:01.822041   33509 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem (1679 bytes)
	I0103 19:34:01.822111   33509 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem org=jenkins.multinode-484895-m03 san=[192.168.39.156 192.168.39.156 localhost 127.0.0.1 minikube multinode-484895-m03]
	I0103 19:34:02.019847   33509 provision.go:172] copyRemoteCerts
	I0103 19:34:02.019903   33509 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0103 19:34:02.019928   33509 main.go:141] libmachine: (multinode-484895-m03) Calling .GetSSHHostname
	I0103 19:34:02.022739   33509 main.go:141] libmachine: (multinode-484895-m03) DBG | domain multinode-484895-m03 has defined MAC address 52:54:00:a2:34:2e in network mk-multinode-484895
	I0103 19:34:02.023085   33509 main.go:141] libmachine: (multinode-484895-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:34:2e", ip: ""} in network mk-multinode-484895: {Iface:virbr1 ExpiryTime:2024-01-03 20:24:00 +0000 UTC Type:0 Mac:52:54:00:a2:34:2e Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:multinode-484895-m03 Clientid:01:52:54:00:a2:34:2e}
	I0103 19:34:02.023115   33509 main.go:141] libmachine: (multinode-484895-m03) DBG | domain multinode-484895-m03 has defined IP address 192.168.39.156 and MAC address 52:54:00:a2:34:2e in network mk-multinode-484895
	I0103 19:34:02.023300   33509 main.go:141] libmachine: (multinode-484895-m03) Calling .GetSSHPort
	I0103 19:34:02.023485   33509 main.go:141] libmachine: (multinode-484895-m03) Calling .GetSSHKeyPath
	I0103 19:34:02.023642   33509 main.go:141] libmachine: (multinode-484895-m03) Calling .GetSSHUsername
	I0103 19:34:02.023804   33509 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/multinode-484895-m03/id_rsa Username:docker}
	I0103 19:34:02.120468   33509 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0103 19:34:02.120557   33509 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0103 19:34:02.143285   33509 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0103 19:34:02.143351   33509 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0103 19:34:02.166334   33509 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0103 19:34:02.166400   33509 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0103 19:34:02.190160   33509 provision.go:86] duration metric: configureAuth took 374.575635ms
	I0103 19:34:02.190194   33509 buildroot.go:189] setting minikube options for container-runtime
	I0103 19:34:02.190488   33509 config.go:182] Loaded profile config "multinode-484895": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 19:34:02.190608   33509 main.go:141] libmachine: (multinode-484895-m03) Calling .GetSSHHostname
	I0103 19:34:02.193551   33509 main.go:141] libmachine: (multinode-484895-m03) DBG | domain multinode-484895-m03 has defined MAC address 52:54:00:a2:34:2e in network mk-multinode-484895
	I0103 19:34:02.194071   33509 main.go:141] libmachine: (multinode-484895-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:34:2e", ip: ""} in network mk-multinode-484895: {Iface:virbr1 ExpiryTime:2024-01-03 20:24:00 +0000 UTC Type:0 Mac:52:54:00:a2:34:2e Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:multinode-484895-m03 Clientid:01:52:54:00:a2:34:2e}
	I0103 19:34:02.194106   33509 main.go:141] libmachine: (multinode-484895-m03) DBG | domain multinode-484895-m03 has defined IP address 192.168.39.156 and MAC address 52:54:00:a2:34:2e in network mk-multinode-484895
	I0103 19:34:02.194306   33509 main.go:141] libmachine: (multinode-484895-m03) Calling .GetSSHPort
	I0103 19:34:02.194563   33509 main.go:141] libmachine: (multinode-484895-m03) Calling .GetSSHKeyPath
	I0103 19:34:02.194749   33509 main.go:141] libmachine: (multinode-484895-m03) Calling .GetSSHKeyPath
	I0103 19:34:02.194945   33509 main.go:141] libmachine: (multinode-484895-m03) Calling .GetSSHUsername
	I0103 19:34:02.195139   33509 main.go:141] libmachine: Using SSH client type: native
	I0103 19:34:02.195444   33509 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.39.156 22 <nil> <nil>}
	I0103 19:34:02.195459   33509 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0103 19:35:32.891711   33509 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0103 19:35:32.891746   33509 machine.go:91] provisioned docker machine in 1m31.377515225s
	I0103 19:35:32.891757   33509 start.go:300] post-start starting for "multinode-484895-m03" (driver="kvm2")
	I0103 19:35:32.891769   33509 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0103 19:35:32.891797   33509 main.go:141] libmachine: (multinode-484895-m03) Calling .DriverName
	I0103 19:35:32.892165   33509 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0103 19:35:32.892207   33509 main.go:141] libmachine: (multinode-484895-m03) Calling .GetSSHHostname
	I0103 19:35:32.895434   33509 main.go:141] libmachine: (multinode-484895-m03) DBG | domain multinode-484895-m03 has defined MAC address 52:54:00:a2:34:2e in network mk-multinode-484895
	I0103 19:35:32.895977   33509 main.go:141] libmachine: (multinode-484895-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:34:2e", ip: ""} in network mk-multinode-484895: {Iface:virbr1 ExpiryTime:2024-01-03 20:24:00 +0000 UTC Type:0 Mac:52:54:00:a2:34:2e Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:multinode-484895-m03 Clientid:01:52:54:00:a2:34:2e}
	I0103 19:35:32.896009   33509 main.go:141] libmachine: (multinode-484895-m03) DBG | domain multinode-484895-m03 has defined IP address 192.168.39.156 and MAC address 52:54:00:a2:34:2e in network mk-multinode-484895
	I0103 19:35:32.896231   33509 main.go:141] libmachine: (multinode-484895-m03) Calling .GetSSHPort
	I0103 19:35:32.896406   33509 main.go:141] libmachine: (multinode-484895-m03) Calling .GetSSHKeyPath
	I0103 19:35:32.896629   33509 main.go:141] libmachine: (multinode-484895-m03) Calling .GetSSHUsername
	I0103 19:35:32.896802   33509 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/multinode-484895-m03/id_rsa Username:docker}
	I0103 19:35:32.994036   33509 ssh_runner.go:195] Run: cat /etc/os-release
	I0103 19:35:32.998275   33509 command_runner.go:130] > NAME=Buildroot
	I0103 19:35:32.998303   33509 command_runner.go:130] > VERSION=2021.02.12-1-gae27a7b-dirty
	I0103 19:35:32.998309   33509 command_runner.go:130] > ID=buildroot
	I0103 19:35:32.998316   33509 command_runner.go:130] > VERSION_ID=2021.02.12
	I0103 19:35:32.998323   33509 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0103 19:35:32.998503   33509 info.go:137] Remote host: Buildroot 2021.02.12
	I0103 19:35:32.998549   33509 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-9609/.minikube/addons for local assets ...
	I0103 19:35:32.998620   33509 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-9609/.minikube/files for local assets ...
	I0103 19:35:32.998726   33509 filesync.go:149] local asset: /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem -> 167952.pem in /etc/ssl/certs
	I0103 19:35:32.998738   33509 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem -> /etc/ssl/certs/167952.pem
	I0103 19:35:32.998843   33509 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0103 19:35:33.007497   33509 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem --> /etc/ssl/certs/167952.pem (1708 bytes)
	I0103 19:35:33.030248   33509 start.go:303] post-start completed in 138.476232ms
	I0103 19:35:33.030277   33509 fix.go:56] fixHost completed within 1m31.537938554s
	I0103 19:35:33.030297   33509 main.go:141] libmachine: (multinode-484895-m03) Calling .GetSSHHostname
	I0103 19:35:33.033185   33509 main.go:141] libmachine: (multinode-484895-m03) DBG | domain multinode-484895-m03 has defined MAC address 52:54:00:a2:34:2e in network mk-multinode-484895
	I0103 19:35:33.033580   33509 main.go:141] libmachine: (multinode-484895-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:34:2e", ip: ""} in network mk-multinode-484895: {Iface:virbr1 ExpiryTime:2024-01-03 20:24:00 +0000 UTC Type:0 Mac:52:54:00:a2:34:2e Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:multinode-484895-m03 Clientid:01:52:54:00:a2:34:2e}
	I0103 19:35:33.033611   33509 main.go:141] libmachine: (multinode-484895-m03) DBG | domain multinode-484895-m03 has defined IP address 192.168.39.156 and MAC address 52:54:00:a2:34:2e in network mk-multinode-484895
	I0103 19:35:33.033766   33509 main.go:141] libmachine: (multinode-484895-m03) Calling .GetSSHPort
	I0103 19:35:33.033965   33509 main.go:141] libmachine: (multinode-484895-m03) Calling .GetSSHKeyPath
	I0103 19:35:33.034141   33509 main.go:141] libmachine: (multinode-484895-m03) Calling .GetSSHKeyPath
	I0103 19:35:33.034294   33509 main.go:141] libmachine: (multinode-484895-m03) Calling .GetSSHUsername
	I0103 19:35:33.034459   33509 main.go:141] libmachine: Using SSH client type: native
	I0103 19:35:33.034850   33509 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.39.156 22 <nil> <nil>}
	I0103 19:35:33.034863   33509 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0103 19:35:33.163631   33509 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704310533.153262117
	
	I0103 19:35:33.163653   33509 fix.go:206] guest clock: 1704310533.153262117
	I0103 19:35:33.163660   33509 fix.go:219] Guest: 2024-01-03 19:35:33.153262117 +0000 UTC Remote: 2024-01-03 19:35:33.030281591 +0000 UTC m=+553.599967950 (delta=122.980526ms)
	I0103 19:35:33.163673   33509 fix.go:190] guest clock delta is within tolerance: 122.980526ms
	I0103 19:35:33.163677   33509 start.go:83] releasing machines lock for "multinode-484895-m03", held for 1m31.671355268s
	I0103 19:35:33.163697   33509 main.go:141] libmachine: (multinode-484895-m03) Calling .DriverName
	I0103 19:35:33.163960   33509 main.go:141] libmachine: (multinode-484895-m03) Calling .GetIP
	I0103 19:35:33.166983   33509 main.go:141] libmachine: (multinode-484895-m03) DBG | domain multinode-484895-m03 has defined MAC address 52:54:00:a2:34:2e in network mk-multinode-484895
	I0103 19:35:33.167363   33509 main.go:141] libmachine: (multinode-484895-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:34:2e", ip: ""} in network mk-multinode-484895: {Iface:virbr1 ExpiryTime:2024-01-03 20:24:00 +0000 UTC Type:0 Mac:52:54:00:a2:34:2e Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:multinode-484895-m03 Clientid:01:52:54:00:a2:34:2e}
	I0103 19:35:33.167394   33509 main.go:141] libmachine: (multinode-484895-m03) DBG | domain multinode-484895-m03 has defined IP address 192.168.39.156 and MAC address 52:54:00:a2:34:2e in network mk-multinode-484895
	I0103 19:35:33.169601   33509 out.go:177] * Found network options:
	I0103 19:35:33.171307   33509 out.go:177]   - NO_PROXY=192.168.39.191,192.168.39.86
	W0103 19:35:33.172700   33509 proxy.go:119] fail to check proxy env: Error ip not in block
	W0103 19:35:33.172722   33509 proxy.go:119] fail to check proxy env: Error ip not in block
	I0103 19:35:33.172737   33509 main.go:141] libmachine: (multinode-484895-m03) Calling .DriverName
	I0103 19:35:33.173385   33509 main.go:141] libmachine: (multinode-484895-m03) Calling .DriverName
	I0103 19:35:33.173607   33509 main.go:141] libmachine: (multinode-484895-m03) Calling .DriverName
	I0103 19:35:33.173727   33509 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0103 19:35:33.173779   33509 main.go:141] libmachine: (multinode-484895-m03) Calling .GetSSHHostname
	W0103 19:35:33.173801   33509 proxy.go:119] fail to check proxy env: Error ip not in block
	W0103 19:35:33.173818   33509 proxy.go:119] fail to check proxy env: Error ip not in block
	I0103 19:35:33.173874   33509 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0103 19:35:33.173894   33509 main.go:141] libmachine: (multinode-484895-m03) Calling .GetSSHHostname
	I0103 19:35:33.176634   33509 main.go:141] libmachine: (multinode-484895-m03) DBG | domain multinode-484895-m03 has defined MAC address 52:54:00:a2:34:2e in network mk-multinode-484895
	I0103 19:35:33.176829   33509 main.go:141] libmachine: (multinode-484895-m03) DBG | domain multinode-484895-m03 has defined MAC address 52:54:00:a2:34:2e in network mk-multinode-484895
	I0103 19:35:33.176990   33509 main.go:141] libmachine: (multinode-484895-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:34:2e", ip: ""} in network mk-multinode-484895: {Iface:virbr1 ExpiryTime:2024-01-03 20:24:00 +0000 UTC Type:0 Mac:52:54:00:a2:34:2e Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:multinode-484895-m03 Clientid:01:52:54:00:a2:34:2e}
	I0103 19:35:33.177125   33509 main.go:141] libmachine: (multinode-484895-m03) DBG | domain multinode-484895-m03 has defined IP address 192.168.39.156 and MAC address 52:54:00:a2:34:2e in network mk-multinode-484895
	I0103 19:35:33.177162   33509 main.go:141] libmachine: (multinode-484895-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:34:2e", ip: ""} in network mk-multinode-484895: {Iface:virbr1 ExpiryTime:2024-01-03 20:24:00 +0000 UTC Type:0 Mac:52:54:00:a2:34:2e Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:multinode-484895-m03 Clientid:01:52:54:00:a2:34:2e}
	I0103 19:35:33.177193   33509 main.go:141] libmachine: (multinode-484895-m03) DBG | domain multinode-484895-m03 has defined IP address 192.168.39.156 and MAC address 52:54:00:a2:34:2e in network mk-multinode-484895
	I0103 19:35:33.177341   33509 main.go:141] libmachine: (multinode-484895-m03) Calling .GetSSHPort
	I0103 19:35:33.177454   33509 main.go:141] libmachine: (multinode-484895-m03) Calling .GetSSHPort
	I0103 19:35:33.177540   33509 main.go:141] libmachine: (multinode-484895-m03) Calling .GetSSHKeyPath
	I0103 19:35:33.177610   33509 main.go:141] libmachine: (multinode-484895-m03) Calling .GetSSHKeyPath
	I0103 19:35:33.177673   33509 main.go:141] libmachine: (multinode-484895-m03) Calling .GetSSHUsername
	I0103 19:35:33.177731   33509 main.go:141] libmachine: (multinode-484895-m03) Calling .GetSSHUsername
	I0103 19:35:33.177786   33509 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/multinode-484895-m03/id_rsa Username:docker}
	I0103 19:35:33.177838   33509 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/multinode-484895-m03/id_rsa Username:docker}
	I0103 19:35:33.413174   33509 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0103 19:35:33.413310   33509 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0103 19:35:33.418974   33509 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0103 19:35:33.419028   33509 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0103 19:35:33.419095   33509 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0103 19:35:33.428278   33509 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0103 19:35:33.428310   33509 start.go:475] detecting cgroup driver to use...
	I0103 19:35:33.428370   33509 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0103 19:35:33.443378   33509 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0103 19:35:33.457051   33509 docker.go:203] disabling cri-docker service (if available) ...
	I0103 19:35:33.457116   33509 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0103 19:35:33.472034   33509 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0103 19:35:33.485468   33509 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0103 19:35:33.612817   33509 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0103 19:35:33.732083   33509 docker.go:219] disabling docker service ...
	I0103 19:35:33.732159   33509 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0103 19:35:33.746950   33509 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0103 19:35:33.760188   33509 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0103 19:35:33.888272   33509 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0103 19:35:34.015332   33509 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0103 19:35:34.027947   33509 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0103 19:35:34.044547   33509 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0103 19:35:34.044586   33509 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0103 19:35:34.044632   33509 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 19:35:34.054375   33509 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0103 19:35:34.054453   33509 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 19:35:34.063136   33509 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 19:35:34.073752   33509 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 19:35:34.083765   33509 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0103 19:35:34.093856   33509 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0103 19:35:34.102731   33509 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0103 19:35:34.102808   33509 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0103 19:35:34.111714   33509 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0103 19:35:34.250688   33509 ssh_runner.go:195] Run: sudo systemctl restart crio
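The runtime-configuration step above is four in-place edits to the CRI-O drop-in file followed by a daemon reload and a crio restart. Below is a minimal Go sketch that replays those same commands on a local host; the sed and systemctl strings are copied from the log, while `runCmd` is a stand-in helper for illustration, not minikube's ssh_runner API:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func runCmd(cmd string) error {
    	out, err := exec.Command("sh", "-c", cmd).CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("%q failed: %v: %s", cmd, err, out)
    	}
    	return nil
    }

    func main() {
    	conf := "/etc/crio/crio.conf.d/02-crio.conf"
    	steps := []string{
    		// same substitutions the log shows being run over SSH
    		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' %s`, conf),
    		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' %s`, conf),
    		fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
    		fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
    		"sudo systemctl daemon-reload",
    		"sudo systemctl restart crio",
    	}
    	for _, s := range steps {
    		if err := runCmd(s); err != nil {
    			fmt.Println(err)
    			return
    		}
    	}
    	fmt.Println("CRI-O reconfigured and restarted")
    }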
	I0103 19:35:34.471130   33509 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0103 19:35:34.471200   33509 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0103 19:35:34.476290   33509 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0103 19:35:34.476311   33509 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0103 19:35:34.476320   33509 command_runner.go:130] > Device: 16h/22d	Inode: 1151        Links: 1
	I0103 19:35:34.476326   33509 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0103 19:35:34.476333   33509 command_runner.go:130] > Access: 2024-01-03 19:35:34.461866568 +0000
	I0103 19:35:34.476340   33509 command_runner.go:130] > Modify: 2024-01-03 19:35:34.394861825 +0000
	I0103 19:35:34.476345   33509 command_runner.go:130] > Change: 2024-01-03 19:35:34.394861825 +0000
	I0103 19:35:34.476350   33509 command_runner.go:130] >  Birth: -
	I0103 19:35:34.476529   33509 start.go:543] Will wait 60s for crictl version
	I0103 19:35:34.476580   33509 ssh_runner.go:195] Run: which crictl
	I0103 19:35:34.481130   33509 command_runner.go:130] > /usr/bin/crictl
	I0103 19:35:34.481782   33509 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0103 19:35:34.521870   33509 command_runner.go:130] > Version:  0.1.0
	I0103 19:35:34.521892   33509 command_runner.go:130] > RuntimeName:  cri-o
	I0103 19:35:34.521896   33509 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0103 19:35:34.521902   33509 command_runner.go:130] > RuntimeApiVersion:  v1
	I0103 19:35:34.523034   33509 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0103 19:35:34.523119   33509 ssh_runner.go:195] Run: crio --version
	I0103 19:35:34.576107   33509 command_runner.go:130] > crio version 1.24.1
	I0103 19:35:34.576128   33509 command_runner.go:130] > Version:          1.24.1
	I0103 19:35:34.576135   33509 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0103 19:35:34.576139   33509 command_runner.go:130] > GitTreeState:     dirty
	I0103 19:35:34.576144   33509 command_runner.go:130] > BuildDate:        2023-12-16T11:46:37Z
	I0103 19:35:34.576149   33509 command_runner.go:130] > GoVersion:        go1.19.9
	I0103 19:35:34.576154   33509 command_runner.go:130] > Compiler:         gc
	I0103 19:35:34.576158   33509 command_runner.go:130] > Platform:         linux/amd64
	I0103 19:35:34.576163   33509 command_runner.go:130] > Linkmode:         dynamic
	I0103 19:35:34.576170   33509 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0103 19:35:34.576174   33509 command_runner.go:130] > SeccompEnabled:   true
	I0103 19:35:34.576179   33509 command_runner.go:130] > AppArmorEnabled:  false
	I0103 19:35:34.577680   33509 ssh_runner.go:195] Run: crio --version
	I0103 19:35:34.625988   33509 command_runner.go:130] > crio version 1.24.1
	I0103 19:35:34.626008   33509 command_runner.go:130] > Version:          1.24.1
	I0103 19:35:34.626014   33509 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0103 19:35:34.626019   33509 command_runner.go:130] > GitTreeState:     dirty
	I0103 19:35:34.626025   33509 command_runner.go:130] > BuildDate:        2023-12-16T11:46:37Z
	I0103 19:35:34.626030   33509 command_runner.go:130] > GoVersion:        go1.19.9
	I0103 19:35:34.626034   33509 command_runner.go:130] > Compiler:         gc
	I0103 19:35:34.626038   33509 command_runner.go:130] > Platform:         linux/amd64
	I0103 19:35:34.626043   33509 command_runner.go:130] > Linkmode:         dynamic
	I0103 19:35:34.626050   33509 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0103 19:35:34.626054   33509 command_runner.go:130] > SeccompEnabled:   true
	I0103 19:35:34.626058   33509 command_runner.go:130] > AppArmorEnabled:  false
	I0103 19:35:34.629691   33509 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0103 19:35:34.631252   33509 out.go:177]   - env NO_PROXY=192.168.39.191
	I0103 19:35:34.632899   33509 out.go:177]   - env NO_PROXY=192.168.39.191,192.168.39.86
	I0103 19:35:34.634239   33509 main.go:141] libmachine: (multinode-484895-m03) Calling .GetIP
	I0103 19:35:34.637531   33509 main.go:141] libmachine: (multinode-484895-m03) DBG | domain multinode-484895-m03 has defined MAC address 52:54:00:a2:34:2e in network mk-multinode-484895
	I0103 19:35:34.637906   33509 main.go:141] libmachine: (multinode-484895-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:34:2e", ip: ""} in network mk-multinode-484895: {Iface:virbr1 ExpiryTime:2024-01-03 20:24:00 +0000 UTC Type:0 Mac:52:54:00:a2:34:2e Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:multinode-484895-m03 Clientid:01:52:54:00:a2:34:2e}
	I0103 19:35:34.637936   33509 main.go:141] libmachine: (multinode-484895-m03) DBG | domain multinode-484895-m03 has defined IP address 192.168.39.156 and MAC address 52:54:00:a2:34:2e in network mk-multinode-484895
	I0103 19:35:34.638104   33509 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0103 19:35:34.642421   33509 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0103 19:35:34.642557   33509 certs.go:56] Setting up /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/multinode-484895 for IP: 192.168.39.156
	I0103 19:35:34.642586   33509 certs.go:190] acquiring lock for shared ca certs: {Name:mkcbd6a6a2f3ee7625ecf4a1f72bb7f9689bd33d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 19:35:34.642729   33509 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17885-9609/.minikube/ca.key
	I0103 19:35:34.642781   33509 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.key
	I0103 19:35:34.642798   33509 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0103 19:35:34.642818   33509 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0103 19:35:34.642836   33509 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0103 19:35:34.642858   33509 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0103 19:35:34.642924   33509 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795.pem (1338 bytes)
	W0103 19:35:34.642966   33509 certs.go:433] ignoring /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795_empty.pem, impossibly tiny 0 bytes
	I0103 19:35:34.642984   33509 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem (1675 bytes)
	I0103 19:35:34.643023   33509 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem (1078 bytes)
	I0103 19:35:34.643057   33509 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem (1123 bytes)
	I0103 19:35:34.643091   33509 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem (1679 bytes)
	I0103 19:35:34.643146   33509 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem (1708 bytes)
	I0103 19:35:34.643185   33509 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795.pem -> /usr/share/ca-certificates/16795.pem
	I0103 19:35:34.643204   33509 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem -> /usr/share/ca-certificates/167952.pem
	I0103 19:35:34.643220   33509 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0103 19:35:34.643530   33509 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0103 19:35:34.666909   33509 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0103 19:35:34.690969   33509 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0103 19:35:34.714742   33509 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0103 19:35:34.737679   33509 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795.pem --> /usr/share/ca-certificates/16795.pem (1338 bytes)
	I0103 19:35:34.763668   33509 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem --> /usr/share/ca-certificates/167952.pem (1708 bytes)
	I0103 19:35:34.786778   33509 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0103 19:35:34.809196   33509 ssh_runner.go:195] Run: openssl version
	I0103 19:35:34.814341   33509 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0103 19:35:34.814626   33509 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16795.pem && ln -fs /usr/share/ca-certificates/16795.pem /etc/ssl/certs/16795.pem"
	I0103 19:35:34.823911   33509 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16795.pem
	I0103 19:35:34.828463   33509 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan  3 19:07 /usr/share/ca-certificates/16795.pem
	I0103 19:35:34.828785   33509 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  3 19:07 /usr/share/ca-certificates/16795.pem
	I0103 19:35:34.828843   33509 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16795.pem
	I0103 19:35:34.834442   33509 command_runner.go:130] > 51391683
	I0103 19:35:34.834508   33509 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16795.pem /etc/ssl/certs/51391683.0"
	I0103 19:35:34.842749   33509 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167952.pem && ln -fs /usr/share/ca-certificates/167952.pem /etc/ssl/certs/167952.pem"
	I0103 19:35:34.853048   33509 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167952.pem
	I0103 19:35:34.857386   33509 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan  3 19:07 /usr/share/ca-certificates/167952.pem
	I0103 19:35:34.857578   33509 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  3 19:07 /usr/share/ca-certificates/167952.pem
	I0103 19:35:34.857629   33509 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167952.pem
	I0103 19:35:34.862691   33509 command_runner.go:130] > 3ec20f2e
	I0103 19:35:34.862957   33509 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167952.pem /etc/ssl/certs/3ec20f2e.0"
	I0103 19:35:34.871323   33509 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0103 19:35:34.881724   33509 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0103 19:35:34.886190   33509 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan  3 18:58 /usr/share/ca-certificates/minikubeCA.pem
	I0103 19:35:34.886226   33509 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  3 18:58 /usr/share/ca-certificates/minikubeCA.pem
	I0103 19:35:34.886276   33509 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0103 19:35:34.891861   33509 command_runner.go:130] > b5213941
	I0103 19:35:34.891924   33509 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
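The openssl/ln pairs above follow the standard OpenSSL subject-hash layout: each CA file gets a /etc/ssl/certs/<hash>.0 symlink so TLS libraries can locate it by hash. A small Go sketch of that pattern for the minikubeCA file (hash b5213941 in this run); the path and error handling are illustrative, not minikube's certs.go:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	cert := "/usr/share/ca-certificates/minikubeCA.pem" // path taken from the log
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
    	if err != nil {
    		fmt.Println("hash failed:", err)
    		return
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941", as printed above
    	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
    	_ = os.Remove(link) // emulate `ln -fs` by replacing any stale link
    	if err := os.Symlink(cert, link); err != nil {
    		fmt.Println("symlink failed:", err)
    		return
    	}
    	fmt.Println("linked", link, "->", cert)
    }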
	I0103 19:35:34.899937   33509 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0103 19:35:34.903939   33509 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0103 19:35:34.904164   33509 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0103 19:35:34.904264   33509 ssh_runner.go:195] Run: crio config
	I0103 19:35:34.955030   33509 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0103 19:35:34.955056   33509 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0103 19:35:34.955066   33509 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0103 19:35:34.955072   33509 command_runner.go:130] > #
	I0103 19:35:34.955082   33509 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0103 19:35:34.955091   33509 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0103 19:35:34.955099   33509 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0103 19:35:34.955108   33509 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0103 19:35:34.955118   33509 command_runner.go:130] > # reload'.
	I0103 19:35:34.955128   33509 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0103 19:35:34.955139   33509 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0103 19:35:34.955155   33509 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0103 19:35:34.955161   33509 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0103 19:35:34.955165   33509 command_runner.go:130] > [crio]
	I0103 19:35:34.955172   33509 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0103 19:35:34.955184   33509 command_runner.go:130] > # containers images, in this directory.
	I0103 19:35:34.955202   33509 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0103 19:35:34.955219   33509 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0103 19:35:34.955429   33509 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0103 19:35:34.955454   33509 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0103 19:35:34.955465   33509 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0103 19:35:34.955583   33509 command_runner.go:130] > storage_driver = "overlay"
	I0103 19:35:34.955597   33509 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0103 19:35:34.955607   33509 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0103 19:35:34.955615   33509 command_runner.go:130] > storage_option = [
	I0103 19:35:34.955833   33509 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0103 19:35:34.955848   33509 command_runner.go:130] > ]
	I0103 19:35:34.955858   33509 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0103 19:35:34.955868   33509 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0103 19:35:34.956138   33509 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0103 19:35:34.956159   33509 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0103 19:35:34.956170   33509 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0103 19:35:34.956183   33509 command_runner.go:130] > # always happen on a node reboot
	I0103 19:35:34.956549   33509 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0103 19:35:34.956562   33509 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0103 19:35:34.956571   33509 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0103 19:35:34.956586   33509 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0103 19:35:34.956894   33509 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0103 19:35:34.956909   33509 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0103 19:35:34.956923   33509 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0103 19:35:34.957362   33509 command_runner.go:130] > # internal_wipe = true
	I0103 19:35:34.957381   33509 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0103 19:35:34.957392   33509 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0103 19:35:34.957401   33509 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0103 19:35:34.957616   33509 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0103 19:35:34.957637   33509 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0103 19:35:34.957654   33509 command_runner.go:130] > [crio.api]
	I0103 19:35:34.957666   33509 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0103 19:35:34.957902   33509 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0103 19:35:34.957912   33509 command_runner.go:130] > # IP address on which the stream server will listen.
	I0103 19:35:34.958397   33509 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0103 19:35:34.958409   33509 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0103 19:35:34.958414   33509 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0103 19:35:34.958790   33509 command_runner.go:130] > # stream_port = "0"
	I0103 19:35:34.958808   33509 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0103 19:35:34.959076   33509 command_runner.go:130] > # stream_enable_tls = false
	I0103 19:35:34.959088   33509 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0103 19:35:34.959378   33509 command_runner.go:130] > # stream_idle_timeout = ""
	I0103 19:35:34.959388   33509 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0103 19:35:34.959396   33509 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0103 19:35:34.959404   33509 command_runner.go:130] > # minutes.
	I0103 19:35:34.959665   33509 command_runner.go:130] > # stream_tls_cert = ""
	I0103 19:35:34.959676   33509 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0103 19:35:34.959682   33509 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0103 19:35:34.959917   33509 command_runner.go:130] > # stream_tls_key = ""
	I0103 19:35:34.959927   33509 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0103 19:35:34.959935   33509 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0103 19:35:34.959941   33509 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0103 19:35:34.960176   33509 command_runner.go:130] > # stream_tls_ca = ""
	I0103 19:35:34.960188   33509 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0103 19:35:34.960398   33509 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0103 19:35:34.960419   33509 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0103 19:35:34.960605   33509 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0103 19:35:34.960625   33509 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0103 19:35:34.960638   33509 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0103 19:35:34.960647   33509 command_runner.go:130] > [crio.runtime]
	I0103 19:35:34.960658   33509 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0103 19:35:34.960670   33509 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0103 19:35:34.960681   33509 command_runner.go:130] > # "nofile=1024:2048"
	I0103 19:35:34.960692   33509 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0103 19:35:34.960796   33509 command_runner.go:130] > # default_ulimits = [
	I0103 19:35:34.960948   33509 command_runner.go:130] > # ]
	I0103 19:35:34.960964   33509 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0103 19:35:34.961373   33509 command_runner.go:130] > # no_pivot = false
	I0103 19:35:34.961382   33509 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0103 19:35:34.961391   33509 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0103 19:35:34.962849   33509 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0103 19:35:34.962863   33509 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0103 19:35:34.962870   33509 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0103 19:35:34.962877   33509 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0103 19:35:34.962882   33509 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0103 19:35:34.962889   33509 command_runner.go:130] > # Cgroup setting for conmon
	I0103 19:35:34.962901   33509 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0103 19:35:34.962913   33509 command_runner.go:130] > conmon_cgroup = "pod"
	I0103 19:35:34.962926   33509 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0103 19:35:34.962935   33509 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0103 19:35:34.962944   33509 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0103 19:35:34.962950   33509 command_runner.go:130] > conmon_env = [
	I0103 19:35:34.962956   33509 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0103 19:35:34.962962   33509 command_runner.go:130] > ]
	I0103 19:35:34.962968   33509 command_runner.go:130] > # Additional environment variables to set for all the
	I0103 19:35:34.962975   33509 command_runner.go:130] > # containers. These are overridden if set in the
	I0103 19:35:34.962982   33509 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0103 19:35:34.962992   33509 command_runner.go:130] > # default_env = [
	I0103 19:35:34.963002   33509 command_runner.go:130] > # ]
	I0103 19:35:34.963013   33509 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0103 19:35:34.963024   33509 command_runner.go:130] > # selinux = false
	I0103 19:35:34.963037   33509 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0103 19:35:34.963046   33509 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0103 19:35:34.963054   33509 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0103 19:35:34.963059   33509 command_runner.go:130] > # seccomp_profile = ""
	I0103 19:35:34.963065   33509 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0103 19:35:34.963072   33509 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0103 19:35:34.963083   33509 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0103 19:35:34.963095   33509 command_runner.go:130] > # which might increase security.
	I0103 19:35:34.963106   33509 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0103 19:35:34.963117   33509 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0103 19:35:34.963131   33509 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0103 19:35:34.963149   33509 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0103 19:35:34.963162   33509 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0103 19:35:34.963170   33509 command_runner.go:130] > # This option supports live configuration reload.
	I0103 19:35:34.963175   33509 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0103 19:35:34.963182   33509 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0103 19:35:34.963192   33509 command_runner.go:130] > # the cgroup blockio controller.
	I0103 19:35:34.963203   33509 command_runner.go:130] > # blockio_config_file = ""
	I0103 19:35:34.963214   33509 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0103 19:35:34.963225   33509 command_runner.go:130] > # irqbalance daemon.
	I0103 19:35:34.963234   33509 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0103 19:35:34.963248   33509 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0103 19:35:34.963259   33509 command_runner.go:130] > # This option supports live configuration reload.
	I0103 19:35:34.963269   33509 command_runner.go:130] > # rdt_config_file = ""
	I0103 19:35:34.963280   33509 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0103 19:35:34.963284   33509 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0103 19:35:34.963297   33509 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0103 19:35:34.963308   33509 command_runner.go:130] > # separate_pull_cgroup = ""
	I0103 19:35:34.963321   33509 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0103 19:35:34.963335   33509 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0103 19:35:34.963345   33509 command_runner.go:130] > # will be added.
	I0103 19:35:34.963353   33509 command_runner.go:130] > # default_capabilities = [
	I0103 19:35:34.963362   33509 command_runner.go:130] > # 	"CHOWN",
	I0103 19:35:34.963369   33509 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0103 19:35:34.963377   33509 command_runner.go:130] > # 	"FSETID",
	I0103 19:35:34.963383   33509 command_runner.go:130] > # 	"FOWNER",
	I0103 19:35:34.963389   33509 command_runner.go:130] > # 	"SETGID",
	I0103 19:35:34.963399   33509 command_runner.go:130] > # 	"SETUID",
	I0103 19:35:34.963410   33509 command_runner.go:130] > # 	"SETPCAP",
	I0103 19:35:34.963417   33509 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0103 19:35:34.963427   33509 command_runner.go:130] > # 	"KILL",
	I0103 19:35:34.963436   33509 command_runner.go:130] > # ]
	I0103 19:35:34.963449   33509 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0103 19:35:34.963463   33509 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0103 19:35:34.963473   33509 command_runner.go:130] > # default_sysctls = [
	I0103 19:35:34.963479   33509 command_runner.go:130] > # ]
	I0103 19:35:34.963484   33509 command_runner.go:130] > # List of devices on the host that a
	I0103 19:35:34.963498   33509 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0103 19:35:34.963509   33509 command_runner.go:130] > # allowed_devices = [
	I0103 19:35:34.963516   33509 command_runner.go:130] > # 	"/dev/fuse",
	I0103 19:35:34.963525   33509 command_runner.go:130] > # ]
	I0103 19:35:34.963536   33509 command_runner.go:130] > # List of additional devices. specified as
	I0103 19:35:34.963552   33509 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0103 19:35:34.963564   33509 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0103 19:35:34.963584   33509 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0103 19:35:34.963594   33509 command_runner.go:130] > # additional_devices = [
	I0103 19:35:34.963603   33509 command_runner.go:130] > # ]
	I0103 19:35:34.963616   33509 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0103 19:35:34.963625   33509 command_runner.go:130] > # cdi_spec_dirs = [
	I0103 19:35:34.963635   33509 command_runner.go:130] > # 	"/etc/cdi",
	I0103 19:35:34.963644   33509 command_runner.go:130] > # 	"/var/run/cdi",
	I0103 19:35:34.963653   33509 command_runner.go:130] > # ]
	I0103 19:35:34.963665   33509 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0103 19:35:34.963676   33509 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0103 19:35:34.963686   33509 command_runner.go:130] > # Defaults to false.
	I0103 19:35:34.963699   33509 command_runner.go:130] > # device_ownership_from_security_context = false
	I0103 19:35:34.963714   33509 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0103 19:35:34.963726   33509 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0103 19:35:34.963736   33509 command_runner.go:130] > # hooks_dir = [
	I0103 19:35:34.963747   33509 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0103 19:35:34.963756   33509 command_runner.go:130] > # ]
	I0103 19:35:34.963767   33509 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0103 19:35:34.963779   33509 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0103 19:35:34.963791   33509 command_runner.go:130] > # its default mounts from the following two files:
	I0103 19:35:34.963800   33509 command_runner.go:130] > #
	I0103 19:35:34.963812   33509 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0103 19:35:34.963825   33509 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0103 19:35:34.963838   33509 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0103 19:35:34.963847   33509 command_runner.go:130] > #
	I0103 19:35:34.963860   33509 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0103 19:35:34.963871   33509 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0103 19:35:34.963883   33509 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0103 19:35:34.963896   33509 command_runner.go:130] > #      only add mounts it finds in this file.
	I0103 19:35:34.963905   33509 command_runner.go:130] > #
	I0103 19:35:34.963915   33509 command_runner.go:130] > # default_mounts_file = ""
	I0103 19:35:34.963927   33509 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0103 19:35:34.963940   33509 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0103 19:35:34.963950   33509 command_runner.go:130] > pids_limit = 1024
	I0103 19:35:34.963961   33509 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0103 19:35:34.963974   33509 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0103 19:35:34.963987   33509 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0103 19:35:34.964004   33509 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0103 19:35:34.964014   33509 command_runner.go:130] > # log_size_max = -1
	I0103 19:35:34.964028   33509 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0103 19:35:34.964039   33509 command_runner.go:130] > # log_to_journald = false
	I0103 19:35:34.964051   33509 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0103 19:35:34.964059   33509 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0103 19:35:34.964071   33509 command_runner.go:130] > # Path to directory for container attach sockets.
	I0103 19:35:34.964083   33509 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0103 19:35:34.964093   33509 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0103 19:35:34.964104   33509 command_runner.go:130] > # bind_mount_prefix = ""
	I0103 19:35:34.964116   33509 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0103 19:35:34.964126   33509 command_runner.go:130] > # read_only = false
	I0103 19:35:34.964144   33509 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0103 19:35:34.964154   33509 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0103 19:35:34.964164   33509 command_runner.go:130] > # live configuration reload.
	I0103 19:35:34.964174   33509 command_runner.go:130] > # log_level = "info"
	I0103 19:35:34.964187   33509 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0103 19:35:34.964199   33509 command_runner.go:130] > # This option supports live configuration reload.
	I0103 19:35:34.964208   33509 command_runner.go:130] > # log_filter = ""
	I0103 19:35:34.964221   33509 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0103 19:35:34.964235   33509 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0103 19:35:34.964244   33509 command_runner.go:130] > # separated by comma.
	I0103 19:35:34.964252   33509 command_runner.go:130] > # uid_mappings = ""
	I0103 19:35:34.964264   33509 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0103 19:35:34.964273   33509 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0103 19:35:34.964281   33509 command_runner.go:130] > # separated by comma.
	I0103 19:35:34.964289   33509 command_runner.go:130] > # gid_mappings = ""
	I0103 19:35:34.964301   33509 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0103 19:35:34.964312   33509 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0103 19:35:34.964322   33509 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0103 19:35:34.964329   33509 command_runner.go:130] > # minimum_mappable_uid = -1
	I0103 19:35:34.964339   33509 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0103 19:35:34.964352   33509 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0103 19:35:34.964364   33509 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0103 19:35:34.964374   33509 command_runner.go:130] > # minimum_mappable_gid = -1
	I0103 19:35:34.964384   33509 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0103 19:35:34.964397   33509 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0103 19:35:34.964408   33509 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0103 19:35:34.964417   33509 command_runner.go:130] > # ctr_stop_timeout = 30
	I0103 19:35:34.964429   33509 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0103 19:35:34.964441   33509 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0103 19:35:34.964452   33509 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0103 19:35:34.964461   33509 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0103 19:35:34.964471   33509 command_runner.go:130] > drop_infra_ctr = false
	I0103 19:35:34.964481   33509 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0103 19:35:34.964489   33509 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0103 19:35:34.964505   33509 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0103 19:35:34.964515   33509 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0103 19:35:34.964528   33509 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0103 19:35:34.964541   33509 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0103 19:35:34.964551   33509 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0103 19:35:34.964559   33509 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0103 19:35:34.964567   33509 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0103 19:35:34.964573   33509 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0103 19:35:34.964582   33509 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0103 19:35:34.964588   33509 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0103 19:35:34.964595   33509 command_runner.go:130] > # default_runtime = "runc"
	I0103 19:35:34.964600   33509 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0103 19:35:34.964609   33509 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0103 19:35:34.964618   33509 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0103 19:35:34.964626   33509 command_runner.go:130] > # creation as a file is not desired either.
	I0103 19:35:34.964633   33509 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0103 19:35:34.964641   33509 command_runner.go:130] > # the hostname is being managed dynamically.
	I0103 19:35:34.964645   33509 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0103 19:35:34.964651   33509 command_runner.go:130] > # ]
	I0103 19:35:34.964657   33509 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0103 19:35:34.964666   33509 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0103 19:35:34.964674   33509 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0103 19:35:34.964683   33509 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0103 19:35:34.964688   33509 command_runner.go:130] > #
	I0103 19:35:34.964695   33509 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0103 19:35:34.964700   33509 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0103 19:35:34.964706   33509 command_runner.go:130] > #  runtime_type = "oci"
	I0103 19:35:34.964711   33509 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0103 19:35:34.964717   33509 command_runner.go:130] > #  privileged_without_host_devices = false
	I0103 19:35:34.964722   33509 command_runner.go:130] > #  allowed_annotations = []
	I0103 19:35:34.964728   33509 command_runner.go:130] > # Where:
	I0103 19:35:34.964733   33509 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0103 19:35:34.964742   33509 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0103 19:35:34.964750   33509 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0103 19:35:34.964758   33509 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0103 19:35:34.964764   33509 command_runner.go:130] > #   in $PATH.
	I0103 19:35:34.964770   33509 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0103 19:35:34.964777   33509 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0103 19:35:34.964783   33509 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0103 19:35:34.964789   33509 command_runner.go:130] > #   state.
	I0103 19:35:34.964796   33509 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0103 19:35:34.964804   33509 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0103 19:35:34.964810   33509 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0103 19:35:34.964818   33509 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0103 19:35:34.964824   33509 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0103 19:35:34.964832   33509 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0103 19:35:34.964839   33509 command_runner.go:130] > #   The currently recognized values are:
	I0103 19:35:34.964846   33509 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0103 19:35:34.964855   33509 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0103 19:35:34.964863   33509 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0103 19:35:34.964871   33509 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0103 19:35:34.964880   33509 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0103 19:35:34.964888   33509 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0103 19:35:34.964897   33509 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0103 19:35:34.964903   33509 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0103 19:35:34.964911   33509 command_runner.go:130] > #   should be moved to the container's cgroup
	I0103 19:35:34.964918   33509 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0103 19:35:34.964922   33509 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0103 19:35:34.964928   33509 command_runner.go:130] > runtime_type = "oci"
	I0103 19:35:34.964934   33509 command_runner.go:130] > runtime_root = "/run/runc"
	I0103 19:35:34.964940   33509 command_runner.go:130] > runtime_config_path = ""
	I0103 19:35:34.964947   33509 command_runner.go:130] > monitor_path = ""
	I0103 19:35:34.964953   33509 command_runner.go:130] > monitor_cgroup = ""
	I0103 19:35:34.964957   33509 command_runner.go:130] > monitor_exec_cgroup = ""
	I0103 19:35:34.964966   33509 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0103 19:35:34.964972   33509 command_runner.go:130] > # running containers
	I0103 19:35:34.964976   33509 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0103 19:35:34.964986   33509 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0103 19:35:34.965040   33509 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0103 19:35:34.965053   33509 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0103 19:35:34.965058   33509 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0103 19:35:34.965063   33509 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0103 19:35:34.965067   33509 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0103 19:35:34.965072   33509 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0103 19:35:34.965076   33509 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0103 19:35:34.965082   33509 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0103 19:35:34.965087   33509 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0103 19:35:34.965093   33509 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0103 19:35:34.965100   33509 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0103 19:35:34.965108   33509 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0103 19:35:34.965117   33509 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0103 19:35:34.965125   33509 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0103 19:35:34.965134   33509 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0103 19:35:34.965150   33509 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0103 19:35:34.965159   33509 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0103 19:35:34.965169   33509 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0103 19:35:34.965173   33509 command_runner.go:130] > # Example:
	I0103 19:35:34.965180   33509 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0103 19:35:34.965185   33509 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0103 19:35:34.965192   33509 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0103 19:35:34.965197   33509 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0103 19:35:34.965203   33509 command_runner.go:130] > # cpuset = 0
	I0103 19:35:34.965208   33509 command_runner.go:130] > # cpushares = "0-1"
	I0103 19:35:34.965214   33509 command_runner.go:130] > # Where:
	I0103 19:35:34.965219   33509 command_runner.go:130] > # The workload name is workload-type.
	I0103 19:35:34.965228   33509 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0103 19:35:34.965236   33509 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0103 19:35:34.965243   33509 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0103 19:35:34.965252   33509 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0103 19:35:34.965260   33509 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0103 19:35:34.965266   33509 command_runner.go:130] > # 
	I0103 19:35:34.965272   33509 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0103 19:35:34.965278   33509 command_runner.go:130] > #
	I0103 19:35:34.965284   33509 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0103 19:35:34.965292   33509 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0103 19:35:34.965300   33509 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0103 19:35:34.965309   33509 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0103 19:35:34.965317   33509 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0103 19:35:34.965323   33509 command_runner.go:130] > [crio.image]
	I0103 19:35:34.965329   33509 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0103 19:35:34.965336   33509 command_runner.go:130] > # default_transport = "docker://"
	I0103 19:35:34.965342   33509 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0103 19:35:34.965351   33509 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0103 19:35:34.965357   33509 command_runner.go:130] > # global_auth_file = ""
	I0103 19:35:34.965362   33509 command_runner.go:130] > # The image used to instantiate infra containers.
	I0103 19:35:34.965369   33509 command_runner.go:130] > # This option supports live configuration reload.
	I0103 19:35:34.965374   33509 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0103 19:35:34.965382   33509 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0103 19:35:34.965390   33509 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0103 19:35:34.965400   33509 command_runner.go:130] > # This option supports live configuration reload.
	I0103 19:35:34.965405   33509 command_runner.go:130] > # pause_image_auth_file = ""
	I0103 19:35:34.965413   33509 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0103 19:35:34.965422   33509 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0103 19:35:34.965430   33509 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0103 19:35:34.965438   33509 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0103 19:35:34.965443   33509 command_runner.go:130] > # pause_command = "/pause"
	I0103 19:35:34.965451   33509 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0103 19:35:34.965459   33509 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0103 19:35:34.965466   33509 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0103 19:35:34.965475   33509 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0103 19:35:34.965482   33509 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0103 19:35:34.965490   33509 command_runner.go:130] > # signature_policy = ""
	I0103 19:35:34.965499   33509 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0103 19:35:34.965511   33509 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0103 19:35:34.965520   33509 command_runner.go:130] > # changing them here.
	I0103 19:35:34.965527   33509 command_runner.go:130] > # insecure_registries = [
	I0103 19:35:34.965536   33509 command_runner.go:130] > # ]
	I0103 19:35:34.965545   33509 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0103 19:35:34.965556   33509 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0103 19:35:34.965566   33509 command_runner.go:130] > # image_volumes = "mkdir"
	I0103 19:35:34.965577   33509 command_runner.go:130] > # Temporary directory to use for storing big files
	I0103 19:35:34.965585   33509 command_runner.go:130] > # big_files_temporary_dir = ""
	I0103 19:35:34.965594   33509 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0103 19:35:34.965604   33509 command_runner.go:130] > # CNI plugins.
	I0103 19:35:34.965612   33509 command_runner.go:130] > [crio.network]
	I0103 19:35:34.965620   33509 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0103 19:35:34.965628   33509 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0103 19:35:34.965634   33509 command_runner.go:130] > # cni_default_network = ""
	I0103 19:35:34.965640   33509 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0103 19:35:34.965647   33509 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0103 19:35:34.965652   33509 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0103 19:35:34.965658   33509 command_runner.go:130] > # plugin_dirs = [
	I0103 19:35:34.965662   33509 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0103 19:35:34.965666   33509 command_runner.go:130] > # ]
	I0103 19:35:34.965674   33509 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0103 19:35:34.965678   33509 command_runner.go:130] > [crio.metrics]
	I0103 19:35:34.965685   33509 command_runner.go:130] > # Globally enable or disable metrics support.
	I0103 19:35:34.965689   33509 command_runner.go:130] > enable_metrics = true
	I0103 19:35:34.965696   33509 command_runner.go:130] > # Specify enabled metrics collectors.
	I0103 19:35:34.965701   33509 command_runner.go:130] > # Per default all metrics are enabled.
	I0103 19:35:34.965710   33509 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0103 19:35:34.965718   33509 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0103 19:35:34.965726   33509 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0103 19:35:34.965732   33509 command_runner.go:130] > # metrics_collectors = [
	I0103 19:35:34.965737   33509 command_runner.go:130] > # 	"operations",
	I0103 19:35:34.965741   33509 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0103 19:35:34.965748   33509 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0103 19:35:34.965755   33509 command_runner.go:130] > # 	"operations_errors",
	I0103 19:35:34.965765   33509 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0103 19:35:34.965775   33509 command_runner.go:130] > # 	"image_pulls_by_name",
	I0103 19:35:34.965786   33509 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0103 19:35:34.965795   33509 command_runner.go:130] > # 	"image_pulls_failures",
	I0103 19:35:34.965805   33509 command_runner.go:130] > # 	"image_pulls_successes",
	I0103 19:35:34.965815   33509 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0103 19:35:34.965825   33509 command_runner.go:130] > # 	"image_layer_reuse",
	I0103 19:35:34.965832   33509 command_runner.go:130] > # 	"containers_oom_total",
	I0103 19:35:34.965841   33509 command_runner.go:130] > # 	"containers_oom",
	I0103 19:35:34.965851   33509 command_runner.go:130] > # 	"processes_defunct",
	I0103 19:35:34.965860   33509 command_runner.go:130] > # 	"operations_total",
	I0103 19:35:34.965872   33509 command_runner.go:130] > # 	"operations_latency_seconds",
	I0103 19:35:34.965882   33509 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0103 19:35:34.965889   33509 command_runner.go:130] > # 	"operations_errors_total",
	I0103 19:35:34.965894   33509 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0103 19:35:34.965900   33509 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0103 19:35:34.965905   33509 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0103 19:35:34.965912   33509 command_runner.go:130] > # 	"image_pulls_success_total",
	I0103 19:35:34.965916   33509 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0103 19:35:34.965923   33509 command_runner.go:130] > # 	"containers_oom_count_total",
	I0103 19:35:34.965927   33509 command_runner.go:130] > # ]
	I0103 19:35:34.965935   33509 command_runner.go:130] > # The port on which the metrics server will listen.
	I0103 19:35:34.965939   33509 command_runner.go:130] > # metrics_port = 9090
	I0103 19:35:34.965946   33509 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0103 19:35:34.965950   33509 command_runner.go:130] > # metrics_socket = ""
	I0103 19:35:34.965957   33509 command_runner.go:130] > # The certificate for the secure metrics server.
	I0103 19:35:34.965964   33509 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0103 19:35:34.965972   33509 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0103 19:35:34.965979   33509 command_runner.go:130] > # certificate on any modification event.
	I0103 19:35:34.965983   33509 command_runner.go:130] > # metrics_cert = ""
	I0103 19:35:34.965988   33509 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0103 19:35:34.965995   33509 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0103 19:35:34.965999   33509 command_runner.go:130] > # metrics_key = ""
	I0103 19:35:34.966007   33509 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0103 19:35:34.966012   33509 command_runner.go:130] > [crio.tracing]
	I0103 19:35:34.966020   33509 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0103 19:35:34.966026   33509 command_runner.go:130] > # enable_tracing = false
	I0103 19:35:34.966032   33509 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0103 19:35:34.966038   33509 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0103 19:35:34.966043   33509 command_runner.go:130] > # Number of samples to collect per million spans.
	I0103 19:35:34.966050   33509 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0103 19:35:34.966056   33509 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0103 19:35:34.966062   33509 command_runner.go:130] > [crio.stats]
	I0103 19:35:34.966068   33509 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0103 19:35:34.966074   33509 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0103 19:35:34.966081   33509 command_runner.go:130] > # stats_collection_period = 0
	I0103 19:35:34.966115   33509 command_runner.go:130] ! time="2024-01-03 19:35:34.941068915Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0103 19:35:34.966127   33509 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
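
The [crio.runtime.workloads] block dumped above is commented-out boilerplate, but it documents how a pod would opt into the example "workload-type" workload: the pod carries the key-only activation annotation "io.crio/workload", and per-container overrides follow the prefix form shown in the config. A minimal sketch under those assumptions (pod and container names are hypothetical, and the annotations must be present at pod creation time for CRI-O to see them):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: workload-demo                     # hypothetical name
      annotations:
        io.crio/workload: ""                  # activation annotation; key only, value ignored
        io.crio.workload-type/workload-demo: '{"cpushares": "512"}'   # mirrors the example form in the config above
    spec:
      containers:
      - name: workload-demo
        image: busybox
        command: ["sleep", "3600"]
    EOF
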
	I0103 19:35:34.966185   33509 cni.go:84] Creating CNI manager for ""
	I0103 19:35:34.966196   33509 cni.go:136] 3 nodes found, recommending kindnet
	I0103 19:35:34.966207   33509 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0103 19:35:34.966233   33509 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.156 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-484895 NodeName:multinode-484895-m03 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.191"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.156 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0103 19:35:34.966349   33509 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.156
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-484895-m03"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.156
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.191"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
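
The rendered kubeadm config above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in one file) is what minikube hands to kubeadm for this node. If you wanted to sanity-check such a file by hand, a minimal sketch, assuming it has been saved to the hypothetical path /tmp/kubeadm.yaml:

    # static validation against the kubeadm API types (available in recent kubeadm releases)
    kubeadm config validate --config /tmp/kubeadm.yaml
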
	
	I0103 19:35:34.966398   33509 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-484895-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.156
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-484895 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0103 19:35:34.966445   33509 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0103 19:35:34.975787   33509 command_runner.go:130] > kubeadm
	I0103 19:35:34.975810   33509 command_runner.go:130] > kubectl
	I0103 19:35:34.975817   33509 command_runner.go:130] > kubelet
	I0103 19:35:34.975839   33509 binaries.go:44] Found k8s binaries, skipping transfer
	I0103 19:35:34.975891   33509 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0103 19:35:34.984523   33509 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0103 19:35:35.000545   33509 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
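
The two scp calls above install the kubelet unit and the 10-kubeadm.conf drop-in carrying the ExecStart line printed earlier. To inspect the effective unit on the node itself, a quick sketch using standard systemd tooling (not part of the test run):

    systemctl cat kubelet                              # unit plus /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    journalctl -u kubelet -b --no-pager | tail -n 50   # recent kubelet log lines for this boot
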
	I0103 19:35:35.016578   33509 ssh_runner.go:195] Run: grep 192.168.39.191	control-plane.minikube.internal$ /etc/hosts
	I0103 19:35:35.020269   33509 command_runner.go:130] > 192.168.39.191	control-plane.minikube.internal
	I0103 19:35:35.020617   33509 host.go:66] Checking if "multinode-484895" exists ...
	I0103 19:35:35.020901   33509 config.go:182] Loaded profile config "multinode-484895": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 19:35:35.020928   33509 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 19:35:35.020963   33509 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 19:35:35.035221   33509 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36843
	I0103 19:35:35.035618   33509 main.go:141] libmachine: () Calling .GetVersion
	I0103 19:35:35.036111   33509 main.go:141] libmachine: Using API Version  1
	I0103 19:35:35.036132   33509 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 19:35:35.036422   33509 main.go:141] libmachine: () Calling .GetMachineName
	I0103 19:35:35.036608   33509 main.go:141] libmachine: (multinode-484895) Calling .DriverName
	I0103 19:35:35.036757   33509 start.go:304] JoinCluster: &{Name:multinode-484895 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.4 ClusterName:multinode-484895 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.191 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.86 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.156 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingr
ess-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 19:35:35.036854   33509 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0103 19:35:35.036867   33509 main.go:141] libmachine: (multinode-484895) Calling .GetSSHHostname
	I0103 19:35:35.039604   33509 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:35:35.040023   33509 main.go:141] libmachine: (multinode-484895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:f0:8c", ip: ""} in network mk-multinode-484895: {Iface:virbr1 ExpiryTime:2024-01-03 20:31:29 +0000 UTC Type:0 Mac:52:54:00:28:f0:8c Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:multinode-484895 Clientid:01:52:54:00:28:f0:8c}
	I0103 19:35:35.040045   33509 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined IP address 192.168.39.191 and MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:35:35.040166   33509 main.go:141] libmachine: (multinode-484895) Calling .GetSSHPort
	I0103 19:35:35.040386   33509 main.go:141] libmachine: (multinode-484895) Calling .GetSSHKeyPath
	I0103 19:35:35.040592   33509 main.go:141] libmachine: (multinode-484895) Calling .GetSSHUsername
	I0103 19:35:35.040777   33509 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/multinode-484895/id_rsa Username:docker}
	I0103 19:35:35.212938   33509 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token czlxed.dn28ffk3538s5l6g --discovery-token-ca-cert-hash sha256:abd7748e33dd825416f0452914584982da7041f4caa98027889459d3fee91b12 
	I0103 19:35:35.212990   33509 start.go:317] removing existing worker node "m03" before attempting to rejoin cluster: &{Name:m03 IP:192.168.39.156 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I0103 19:35:35.213018   33509 host.go:66] Checking if "multinode-484895" exists ...
	I0103 19:35:35.213438   33509 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 19:35:35.213487   33509 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 19:35:35.227870   33509 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39375
	I0103 19:35:35.228344   33509 main.go:141] libmachine: () Calling .GetVersion
	I0103 19:35:35.228766   33509 main.go:141] libmachine: Using API Version  1
	I0103 19:35:35.228781   33509 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 19:35:35.229140   33509 main.go:141] libmachine: () Calling .GetMachineName
	I0103 19:35:35.229353   33509 main.go:141] libmachine: (multinode-484895) Calling .DriverName
	I0103 19:35:35.229614   33509 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-484895-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0103 19:35:35.229643   33509 main.go:141] libmachine: (multinode-484895) Calling .GetSSHHostname
	I0103 19:35:35.232487   33509 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:35:35.232831   33509 main.go:141] libmachine: (multinode-484895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:f0:8c", ip: ""} in network mk-multinode-484895: {Iface:virbr1 ExpiryTime:2024-01-03 20:31:29 +0000 UTC Type:0 Mac:52:54:00:28:f0:8c Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:multinode-484895 Clientid:01:52:54:00:28:f0:8c}
	I0103 19:35:35.232872   33509 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined IP address 192.168.39.191 and MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:35:35.233077   33509 main.go:141] libmachine: (multinode-484895) Calling .GetSSHPort
	I0103 19:35:35.233261   33509 main.go:141] libmachine: (multinode-484895) Calling .GetSSHKeyPath
	I0103 19:35:35.233424   33509 main.go:141] libmachine: (multinode-484895) Calling .GetSSHUsername
	I0103 19:35:35.233563   33509 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/multinode-484895/id_rsa Username:docker}
	I0103 19:35:35.382463   33509 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I0103 19:35:35.437864   33509 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-zt7zf, kube-system/kube-proxy-strp6
	I0103 19:35:38.460157   33509 command_runner.go:130] > node/multinode-484895-m03 cordoned
	I0103 19:35:38.460184   33509 command_runner.go:130] > pod "busybox-5bc68d56bd-cgps8" has DeletionTimestamp older than 1 seconds, skipping
	I0103 19:35:38.460193   33509 command_runner.go:130] > node/multinode-484895-m03 drained
	I0103 19:35:38.460225   33509 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-484895-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.230578785s)
	I0103 19:35:38.460238   33509 node.go:108] successfully drained node "m03"
	I0103 19:35:38.460649   33509 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17885-9609/kubeconfig
	I0103 19:35:38.460966   33509 kapi.go:59] client config for multinode-484895: &rest.Config{Host:"https://192.168.39.191:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17885-9609/.minikube/profiles/multinode-484895/client.crt", KeyFile:"/home/jenkins/minikube-integration/17885-9609/.minikube/profiles/multinode-484895/client.key", CAFile:"/home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c20060), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0103 19:35:38.461307   33509 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0103 19:35:38.461364   33509 round_trippers.go:463] DELETE https://192.168.39.191:8443/api/v1/nodes/multinode-484895-m03
	I0103 19:35:38.461375   33509 round_trippers.go:469] Request Headers:
	I0103 19:35:38.461386   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:35:38.461396   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:35:38.461402   33509 round_trippers.go:473]     Content-Type: application/json
	I0103 19:35:38.481602   33509 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I0103 19:35:38.481627   33509 round_trippers.go:577] Response Headers:
	I0103 19:35:38.481636   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:35:38.481645   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:35:38.481651   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:35:38.481657   33509 round_trippers.go:580]     Content-Length: 171
	I0103 19:35:38.481662   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:35:38 GMT
	I0103 19:35:38.481667   33509 round_trippers.go:580]     Audit-Id: bdc9217a-28b4-491d-a602-4eefcfd1b142
	I0103 19:35:38.481673   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:35:38.481693   33509 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-484895-m03","kind":"nodes","uid":"a1762911-aa8b-49cb-8632-51fb5a4220e2"}}
	I0103 19:35:38.481724   33509 node.go:124] successfully deleted node "m03"
	I0103 19:35:38.481733   33509 start.go:321] successfully removed existing worker node "m03" from cluster: &{Name:m03 IP:192.168.39.156 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I0103 19:35:38.481767   33509 start.go:325] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.39.156 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I0103 19:35:38.481786   33509 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token czlxed.dn28ffk3538s5l6g --discovery-token-ca-cert-hash sha256:abd7748e33dd825416f0452914584982da7041f4caa98027889459d3fee91b12 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-484895-m03"
	I0103 19:35:38.549379   33509 command_runner.go:130] > [preflight] Running pre-flight checks
	I0103 19:35:38.801374   33509 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0103 19:35:38.801413   33509 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0103 19:35:38.873219   33509 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0103 19:35:38.873247   33509 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0103 19:35:38.873256   33509 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0103 19:35:39.091526   33509 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0103 19:35:39.614319   33509 command_runner.go:130] > This node has joined the cluster:
	I0103 19:35:39.614355   33509 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0103 19:35:39.614366   33509 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0103 19:35:39.614377   33509 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0103 19:35:39.617021   33509 command_runner.go:130] ! W0103 19:35:38.539051    2337 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0103 19:35:39.617058   33509 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0103 19:35:39.617071   33509 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0103 19:35:39.617083   33509 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0103 19:35:39.617102   33509 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token czlxed.dn28ffk3538s5l6g --discovery-token-ca-cert-hash sha256:abd7748e33dd825416f0452914584982da7041f4caa98027889459d3fee91b12 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-484895-m03": (1.135303703s)
	I0103 19:35:39.617124   33509 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0103 19:35:39.898724   33509 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=1b6a81cbc05f28310ff11df4170e79e2b8bf477a minikube.k8s.io/name=multinode-484895 minikube.k8s.io/updated_at=2024_01_03T19_35_39_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:35:40.002249   33509 command_runner.go:130] > node/multinode-484895-m02 labeled
	I0103 19:35:40.015658   33509 command_runner.go:130] > node/multinode-484895-m03 labeled
	I0103 19:35:40.017260   33509 start.go:306] JoinCluster complete in 4.980497793s
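
Condensed, the rejoin that just completed amounts to the following sequence; a sketch of the manual equivalent, with the token and CA-cert hash left as placeholders (the real values appear in the join command printed at 19:35:35.212938). Passing the CRI socket with an explicit unix:// scheme also avoids the deprecation warning logged at 19:35:39.617021.

    # on the control-plane node
    kubeadm token create --print-join-command --ttl=0
    kubectl drain multinode-484895-m03 --ignore-daemonsets --delete-emptydir-data \
        --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction
    kubectl delete node multinode-484895-m03

    # on the worker being re-added
    sudo kubeadm join control-plane.minikube.internal:8443 --token <token> \
        --discovery-token-ca-cert-hash sha256:<hash> \
        --ignore-preflight-errors=all \
        --cri-socket unix:///var/run/crio/crio.sock \
        --node-name=multinode-484895-m03
    sudo systemctl daemon-reload && sudo systemctl enable --now kubelet
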
	I0103 19:35:40.017299   33509 cni.go:84] Creating CNI manager for ""
	I0103 19:35:40.017306   33509 cni.go:136] 3 nodes found, recommending kindnet
	I0103 19:35:40.017362   33509 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0103 19:35:40.025966   33509 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0103 19:35:40.025999   33509 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0103 19:35:40.026009   33509 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0103 19:35:40.026020   33509 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0103 19:35:40.026033   33509 command_runner.go:130] > Access: 2024-01-03 19:31:29.762982388 +0000
	I0103 19:35:40.026047   33509 command_runner.go:130] > Modify: 2023-12-16 11:53:47.000000000 +0000
	I0103 19:35:40.026061   33509 command_runner.go:130] > Change: 2024-01-03 19:31:27.994982388 +0000
	I0103 19:35:40.026067   33509 command_runner.go:130] >  Birth: -
	I0103 19:35:40.026121   33509 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0103 19:35:40.026138   33509 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0103 19:35:40.045873   33509 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0103 19:35:40.376805   33509 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0103 19:35:40.380640   33509 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0103 19:35:40.383668   33509 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0103 19:35:40.393756   33509 command_runner.go:130] > daemonset.apps/kindnet configured
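
With the kindnet manifest reapplied to the now three-node cluster, the rollout can be confirmed from the CLI; a sketch, assuming the daemonset carries the app=kindnet label used by minikube's kindnet manifest (the label itself is not shown in this log):

    kubectl -n kube-system rollout status daemonset/kindnet --timeout=120s
    kubectl -n kube-system get pods -l app=kindnet -o wide
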
	I0103 19:35:40.397128   33509 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17885-9609/kubeconfig
	I0103 19:35:40.397375   33509 kapi.go:59] client config for multinode-484895: &rest.Config{Host:"https://192.168.39.191:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17885-9609/.minikube/profiles/multinode-484895/client.crt", KeyFile:"/home/jenkins/minikube-integration/17885-9609/.minikube/profiles/multinode-484895/client.key", CAFile:"/home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c20060), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0103 19:35:40.397752   33509 round_trippers.go:463] GET https://192.168.39.191:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0103 19:35:40.397765   33509 round_trippers.go:469] Request Headers:
	I0103 19:35:40.397773   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:35:40.397779   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:35:40.400451   33509 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:35:40.400469   33509 round_trippers.go:577] Response Headers:
	I0103 19:35:40.400476   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:35:40.400481   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:35:40.400487   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:35:40.400492   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:35:40.400497   33509 round_trippers.go:580]     Content-Length: 291
	I0103 19:35:40.400503   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:35:40 GMT
	I0103 19:35:40.400510   33509 round_trippers.go:580]     Audit-Id: dcea8f2d-b393-4439-9ef6-bb6dff740258
	I0103 19:35:40.400531   33509 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"e2317390-8a66-46be-8656-5adca86177ea","resourceVersion":"854","creationTimestamp":"2024-01-03T19:21:43Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0103 19:35:40.400611   33509 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-484895" context rescaled to 1 replicas
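
The scale-subresource call above pins the coredns deployment at a single replica for the multi-node profile. The CLI equivalent would be roughly:

    kubectl -n kube-system scale deployment/coredns --replicas=1
    kubectl -n kube-system get deployment coredns -o jsonpath='{.spec.replicas}'
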
	I0103 19:35:40.400636   33509 start.go:223] Will wait 6m0s for node &{Name:m03 IP:192.168.39.156 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I0103 19:35:40.402839   33509 out.go:177] * Verifying Kubernetes components...
	I0103 19:35:40.404354   33509 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 19:35:40.418933   33509 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17885-9609/kubeconfig
	I0103 19:35:40.419192   33509 kapi.go:59] client config for multinode-484895: &rest.Config{Host:"https://192.168.39.191:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17885-9609/.minikube/profiles/multinode-484895/client.crt", KeyFile:"/home/jenkins/minikube-integration/17885-9609/.minikube/profiles/multinode-484895/client.key", CAFile:"/home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c20060), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0103 19:35:40.419417   33509 node_ready.go:35] waiting up to 6m0s for node "multinode-484895-m03" to be "Ready" ...
	I0103 19:35:40.419505   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895-m03
	I0103 19:35:40.419517   33509 round_trippers.go:469] Request Headers:
	I0103 19:35:40.419527   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:35:40.419536   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:35:40.422098   33509 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:35:40.422116   33509 round_trippers.go:577] Response Headers:
	I0103 19:35:40.422123   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:35:40 GMT
	I0103 19:35:40.422129   33509 round_trippers.go:580]     Audit-Id: a2742f5b-8c29-4c37-881d-1fb3c29b4d50
	I0103 19:35:40.422134   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:35:40.422139   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:35:40.422144   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:35:40.422152   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:35:40.422316   33509 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895-m03","uid":"fbdb9e11-1f68-4575-ac26-040913541120","resourceVersion":"1187","creationTimestamp":"2024-01-03T19:35:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_35_39_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:35:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3994 chars]
	I0103 19:35:40.422680   33509 node_ready.go:49] node "multinode-484895-m03" has status "Ready":"True"
	I0103 19:35:40.422701   33509 node_ready.go:38] duration metric: took 3.266091ms waiting for node "multinode-484895-m03" to be "Ready" ...
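
The new node is already Ready, and the loop that follows polls the system-critical kube-system pods the same way. Roughly the same checks from the CLI would be (a sketch, not what the test itself executes):

    kubectl wait --for=condition=Ready node/multinode-484895-m03 --timeout=6m
    kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m
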
	I0103 19:35:40.422714   33509 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 19:35:40.422775   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods
	I0103 19:35:40.422786   33509 round_trippers.go:469] Request Headers:
	I0103 19:35:40.422793   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:35:40.422798   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:35:40.426449   33509 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0103 19:35:40.426475   33509 round_trippers.go:577] Response Headers:
	I0103 19:35:40.426486   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:35:40 GMT
	I0103 19:35:40.426494   33509 round_trippers.go:580]     Audit-Id: 6066c90a-7944-40fa-beb8-ca0d0070fbc1
	I0103 19:35:40.426503   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:35:40.426511   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:35:40.426530   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:35:40.426539   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:35:40.427337   33509 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1191"},"items":[{"metadata":{"name":"coredns-5dd5756b68-wzsqb","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9e8dd1fe-7476-40d5-8cb7-562fcdc5deaa","resourceVersion":"833","creationTimestamp":"2024-01-03T19:21:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e9219a81-ca58-4a90-b963-60ed0c2d0b1c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:21:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e9219a81-ca58-4a90-b963-60ed0c2d0b1c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 82237 chars]
	I0103 19:35:40.430737   33509 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-wzsqb" in "kube-system" namespace to be "Ready" ...
	I0103 19:35:40.430843   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-wzsqb
	I0103 19:35:40.430854   33509 round_trippers.go:469] Request Headers:
	I0103 19:35:40.430866   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:35:40.430878   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:35:40.433286   33509 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:35:40.433308   33509 round_trippers.go:577] Response Headers:
	I0103 19:35:40.433317   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:35:40.433325   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:35:40.433333   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:35:40.433340   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:35:40 GMT
	I0103 19:35:40.433348   33509 round_trippers.go:580]     Audit-Id: 089bef1e-7aad-45be-9b2b-d721a1863873
	I0103 19:35:40.433358   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:35:40.433465   33509 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-wzsqb","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9e8dd1fe-7476-40d5-8cb7-562fcdc5deaa","resourceVersion":"833","creationTimestamp":"2024-01-03T19:21:56Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e9219a81-ca58-4a90-b963-60ed0c2d0b1c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:21:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e9219a81-ca58-4a90-b963-60ed0c2d0b1c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I0103 19:35:40.433932   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895
	I0103 19:35:40.433947   33509 round_trippers.go:469] Request Headers:
	I0103 19:35:40.433955   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:35:40.433960   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:35:40.436299   33509 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:35:40.436324   33509 round_trippers.go:577] Response Headers:
	I0103 19:35:40.436334   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:35:40.436342   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:35:40.436349   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:35:40.436357   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:35:40 GMT
	I0103 19:35:40.436366   33509 round_trippers.go:580]     Audit-Id: c0436b6c-b505-471e-85a1-35517bff172f
	I0103 19:35:40.436378   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:35:40.436597   33509 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","resourceVersion":"865","creationTimestamp":"2024-01-03T19:21:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_21_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:21:40Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0103 19:35:40.436987   33509 pod_ready.go:92] pod "coredns-5dd5756b68-wzsqb" in "kube-system" namespace has status "Ready":"True"
	I0103 19:35:40.437006   33509 pod_ready.go:81] duration metric: took 6.244014ms waiting for pod "coredns-5dd5756b68-wzsqb" in "kube-system" namespace to be "Ready" ...
	I0103 19:35:40.437014   33509 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-484895" in "kube-system" namespace to be "Ready" ...
	I0103 19:35:40.437059   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-484895
	I0103 19:35:40.437067   33509 round_trippers.go:469] Request Headers:
	I0103 19:35:40.437074   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:35:40.437080   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:35:40.439115   33509 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:35:40.439131   33509 round_trippers.go:577] Response Headers:
	I0103 19:35:40.439140   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:35:40 GMT
	I0103 19:35:40.439148   33509 round_trippers.go:580]     Audit-Id: b373b5db-72b1-44ea-be83-9889b0dd6c8f
	I0103 19:35:40.439156   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:35:40.439164   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:35:40.439174   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:35:40.439187   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:35:40.439275   33509 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-484895","namespace":"kube-system","uid":"2b5f9dc7-2d61-4968-9b9a-cfc029c9522b","resourceVersion":"825","creationTimestamp":"2024-01-03T19:21:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.191:2379","kubernetes.io/config.hash":"9bc39430cce393fdab624e5093adf15c","kubernetes.io/config.mirror":"9bc39430cce393fdab624e5093adf15c","kubernetes.io/config.seen":"2024-01-03T19:21:43.948366778Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:21:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I0103 19:35:40.439693   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895
	I0103 19:35:40.439710   33509 round_trippers.go:469] Request Headers:
	I0103 19:35:40.439721   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:35:40.439735   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:35:40.441545   33509 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0103 19:35:40.441559   33509 round_trippers.go:577] Response Headers:
	I0103 19:35:40.441566   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:35:40.441572   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:35:40 GMT
	I0103 19:35:40.441581   33509 round_trippers.go:580]     Audit-Id: a8dd1a21-c523-4bc8-927f-64138305d2d6
	I0103 19:35:40.441589   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:35:40.441596   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:35:40.441604   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:35:40.441880   33509 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","resourceVersion":"865","creationTimestamp":"2024-01-03T19:21:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_21_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:21:40Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0103 19:35:40.442150   33509 pod_ready.go:92] pod "etcd-multinode-484895" in "kube-system" namespace has status "Ready":"True"
	I0103 19:35:40.442163   33509 pod_ready.go:81] duration metric: took 5.144211ms waiting for pod "etcd-multinode-484895" in "kube-system" namespace to be "Ready" ...
	I0103 19:35:40.442178   33509 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-484895" in "kube-system" namespace to be "Ready" ...
	I0103 19:35:40.442222   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-484895
	I0103 19:35:40.442229   33509 round_trippers.go:469] Request Headers:
	I0103 19:35:40.442236   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:35:40.442242   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:35:40.444235   33509 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0103 19:35:40.444255   33509 round_trippers.go:577] Response Headers:
	I0103 19:35:40.444264   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:35:40 GMT
	I0103 19:35:40.444273   33509 round_trippers.go:580]     Audit-Id: 07f859d1-1b9c-49ea-8279-94ad2bf38547
	I0103 19:35:40.444280   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:35:40.444290   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:35:40.444297   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:35:40.444305   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:35:40.444429   33509 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-484895","namespace":"kube-system","uid":"f9f36416-b761-4534-8e09-bc3c94813149","resourceVersion":"827","creationTimestamp":"2024-01-03T19:21:44Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.191:8443","kubernetes.io/config.hash":"2adb5a2561f637a585e38e2b73f2b809","kubernetes.io/config.mirror":"2adb5a2561f637a585e38e2b73f2b809","kubernetes.io/config.seen":"2024-01-03T19:21:43.948370781Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:21:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I0103 19:35:40.444893   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895
	I0103 19:35:40.444911   33509 round_trippers.go:469] Request Headers:
	I0103 19:35:40.444917   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:35:40.444923   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:35:40.446821   33509 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0103 19:35:40.446836   33509 round_trippers.go:577] Response Headers:
	I0103 19:35:40.446843   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:35:40 GMT
	I0103 19:35:40.446848   33509 round_trippers.go:580]     Audit-Id: 40a9a345-c06e-4de4-8103-27b0c000835e
	I0103 19:35:40.446853   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:35:40.446859   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:35:40.446865   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:35:40.446870   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:35:40.447035   33509 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","resourceVersion":"865","creationTimestamp":"2024-01-03T19:21:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_21_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:21:40Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0103 19:35:40.447324   33509 pod_ready.go:92] pod "kube-apiserver-multinode-484895" in "kube-system" namespace has status "Ready":"True"
	I0103 19:35:40.447337   33509 pod_ready.go:81] duration metric: took 5.152989ms waiting for pod "kube-apiserver-multinode-484895" in "kube-system" namespace to be "Ready" ...
	I0103 19:35:40.447345   33509 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-484895" in "kube-system" namespace to be "Ready" ...
	I0103 19:35:40.447395   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-484895
	I0103 19:35:40.447402   33509 round_trippers.go:469] Request Headers:
	I0103 19:35:40.447409   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:35:40.447415   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:35:40.449319   33509 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0103 19:35:40.449333   33509 round_trippers.go:577] Response Headers:
	I0103 19:35:40.449339   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:35:40.449345   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:35:40.449350   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:35:40 GMT
	I0103 19:35:40.449355   33509 round_trippers.go:580]     Audit-Id: 85a4b8ed-0ddf-4fc9-be20-be4a2ee4f79c
	I0103 19:35:40.449360   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:35:40.449365   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:35:40.449511   33509 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-484895","namespace":"kube-system","uid":"a04de258-1f92-4ac7-8f30-18ad9ebb6d40","resourceVersion":"838","creationTimestamp":"2024-01-03T19:21:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"091c426717be69d480bcc59d28e953ce","kubernetes.io/config.mirror":"091c426717be69d480bcc59d28e953ce","kubernetes.io/config.seen":"2024-01-03T19:21:43.948371847Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:21:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I0103 19:35:40.449926   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895
	I0103 19:35:40.449944   33509 round_trippers.go:469] Request Headers:
	I0103 19:35:40.449952   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:35:40.449958   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:35:40.453158   33509 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0103 19:35:40.453181   33509 round_trippers.go:577] Response Headers:
	I0103 19:35:40.453191   33509 round_trippers.go:580]     Audit-Id: 549a2ad3-5362-426b-9e5b-d24b8567d621
	I0103 19:35:40.453200   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:35:40.453208   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:35:40.453217   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:35:40.453225   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:35:40.453232   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:35:40 GMT
	I0103 19:35:40.453357   33509 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","resourceVersion":"865","creationTimestamp":"2024-01-03T19:21:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_21_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:21:40Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0103 19:35:40.453754   33509 pod_ready.go:92] pod "kube-controller-manager-multinode-484895" in "kube-system" namespace has status "Ready":"True"
	I0103 19:35:40.453778   33509 pod_ready.go:81] duration metric: took 6.426021ms waiting for pod "kube-controller-manager-multinode-484895" in "kube-system" namespace to be "Ready" ...
	I0103 19:35:40.453791   33509 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-k7jnm" in "kube-system" namespace to be "Ready" ...
	I0103 19:35:40.620196   33509 request.go:629] Waited for 166.340909ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/kube-proxy-k7jnm
	I0103 19:35:40.620281   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/kube-proxy-k7jnm
	I0103 19:35:40.620289   33509 round_trippers.go:469] Request Headers:
	I0103 19:35:40.620304   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:35:40.620317   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:35:40.622573   33509 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:35:40.622594   33509 round_trippers.go:577] Response Headers:
	I0103 19:35:40.622601   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:35:40.622606   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:35:40.622614   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:35:40.622620   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:35:40.622629   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:35:40 GMT
	I0103 19:35:40.622637   33509 round_trippers.go:580]     Audit-Id: 7967e58c-5c14-43fa-a487-b5af252a6b43
	I0103 19:35:40.622814   33509 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-k7jnm","generateName":"kube-proxy-","namespace":"kube-system","uid":"4b0bd9f4-9da5-42c6-83a4-0a3f05f640b3","resourceVersion":"1014","creationTimestamp":"2024-01-03T19:22:34Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"93e45959-afd7-4869-a648-321076d75f45","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:22:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"93e45959-afd7-4869-a648-321076d75f45\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5727 chars]
	I0103 19:35:40.819672   33509 request.go:629] Waited for 196.331993ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.191:8443/api/v1/nodes/multinode-484895-m02
	I0103 19:35:40.819738   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895-m02
	I0103 19:35:40.819743   33509 round_trippers.go:469] Request Headers:
	I0103 19:35:40.819751   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:35:40.819758   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:35:40.824012   33509 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0103 19:35:40.824036   33509 round_trippers.go:577] Response Headers:
	I0103 19:35:40.824044   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:35:40.824049   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:35:40 GMT
	I0103 19:35:40.824054   33509 round_trippers.go:580]     Audit-Id: c8857bc3-ec11-48f9-95db-f8485ff47118
	I0103 19:35:40.824059   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:35:40.824064   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:35:40.824073   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:35:40.824195   33509 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895-m02","uid":"26e72b14-f775-4f90-838e-83277742fe57","resourceVersion":"1186","creationTimestamp":"2024-01-03T19:33:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_35_39_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:33:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3993 chars]
	I0103 19:35:40.824451   33509 pod_ready.go:92] pod "kube-proxy-k7jnm" in "kube-system" namespace has status "Ready":"True"
	I0103 19:35:40.824464   33509 pod_ready.go:81] duration metric: took 370.662757ms waiting for pod "kube-proxy-k7jnm" in "kube-system" namespace to be "Ready" ...
	I0103 19:35:40.824474   33509 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-strp6" in "kube-system" namespace to be "Ready" ...
	I0103 19:35:41.019630   33509 request.go:629] Waited for 195.095659ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/kube-proxy-strp6
	I0103 19:35:41.019710   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/kube-proxy-strp6
	I0103 19:35:41.019716   33509 round_trippers.go:469] Request Headers:
	I0103 19:35:41.019723   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:35:41.019729   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:35:41.022399   33509 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:35:41.022423   33509 round_trippers.go:577] Response Headers:
	I0103 19:35:41.022432   33509 round_trippers.go:580]     Audit-Id: e3ff5a9c-504f-4bea-9d25-549f59e0d7b5
	I0103 19:35:41.022441   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:35:41.022448   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:35:41.022455   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:35:41.022463   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:35:41.022471   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:35:41 GMT
	I0103 19:35:41.022634   33509 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-strp6","generateName":"kube-proxy-","namespace":"kube-system","uid":"f16942b4-2697-4fd7-88f7-3699e16bff79","resourceVersion":"1154","creationTimestamp":"2024-01-03T19:23:25Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"93e45959-afd7-4869-a648-321076d75f45","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:23:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"93e45959-afd7-4869-a648-321076d75f45\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5887 chars]
	I0103 19:35:41.220490   33509 request.go:629] Waited for 197.425757ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.191:8443/api/v1/nodes/multinode-484895-m03
	I0103 19:35:41.220567   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895-m03
	I0103 19:35:41.220573   33509 round_trippers.go:469] Request Headers:
	I0103 19:35:41.220581   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:35:41.220590   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:35:41.223724   33509 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0103 19:35:41.223751   33509 round_trippers.go:577] Response Headers:
	I0103 19:35:41.223759   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:35:41.223765   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:35:41.223770   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:35:41.223775   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:35:41.223780   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:35:41 GMT
	I0103 19:35:41.223789   33509 round_trippers.go:580]     Audit-Id: ea57bd47-3360-4e9e-84a4-9c5f1dcded38
	I0103 19:35:41.224167   33509 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895-m03","uid":"fbdb9e11-1f68-4575-ac26-040913541120","resourceVersion":"1187","creationTimestamp":"2024-01-03T19:35:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_35_39_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:35:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3994 chars]
	I0103 19:35:41.419648   33509 request.go:629] Waited for 94.185854ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/kube-proxy-strp6
	I0103 19:35:41.419726   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/kube-proxy-strp6
	I0103 19:35:41.419738   33509 round_trippers.go:469] Request Headers:
	I0103 19:35:41.419746   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:35:41.419752   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:35:41.422458   33509 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:35:41.422481   33509 round_trippers.go:577] Response Headers:
	I0103 19:35:41.422490   33509 round_trippers.go:580]     Audit-Id: f91c0099-0e47-490a-965e-906f09ba4b49
	I0103 19:35:41.422498   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:35:41.422506   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:35:41.422532   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:35:41.422543   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:35:41.422555   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:35:41 GMT
	I0103 19:35:41.422712   33509 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-strp6","generateName":"kube-proxy-","namespace":"kube-system","uid":"f16942b4-2697-4fd7-88f7-3699e16bff79","resourceVersion":"1203","creationTimestamp":"2024-01-03T19:23:25Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"93e45959-afd7-4869-a648-321076d75f45","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:23:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"93e45959-afd7-4869-a648-321076d75f45\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5731 chars]
	I0103 19:35:41.620312   33509 request.go:629] Waited for 197.172372ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.191:8443/api/v1/nodes/multinode-484895-m03
	I0103 19:35:41.620366   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895-m03
	I0103 19:35:41.620383   33509 round_trippers.go:469] Request Headers:
	I0103 19:35:41.620394   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:35:41.620404   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:35:41.623207   33509 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:35:41.623226   33509 round_trippers.go:577] Response Headers:
	I0103 19:35:41.623233   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:35:41.623238   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:35:41.623244   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:35:41.623249   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:35:41 GMT
	I0103 19:35:41.623255   33509 round_trippers.go:580]     Audit-Id: 53709fd4-8a25-445a-b7f3-9406494e66d1
	I0103 19:35:41.623260   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:35:41.623431   33509 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895-m03","uid":"fbdb9e11-1f68-4575-ac26-040913541120","resourceVersion":"1187","creationTimestamp":"2024-01-03T19:35:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_35_39_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:35:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3994 chars]
	I0103 19:35:41.623773   33509 pod_ready.go:92] pod "kube-proxy-strp6" in "kube-system" namespace has status "Ready":"True"
	I0103 19:35:41.623794   33509 pod_ready.go:81] duration metric: took 799.314358ms waiting for pod "kube-proxy-strp6" in "kube-system" namespace to be "Ready" ...
	I0103 19:35:41.623810   33509 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tp9s2" in "kube-system" namespace to be "Ready" ...
	I0103 19:35:41.820258   33509 request.go:629] Waited for 196.375101ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tp9s2
	I0103 19:35:41.820329   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tp9s2
	I0103 19:35:41.820334   33509 round_trippers.go:469] Request Headers:
	I0103 19:35:41.820341   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:35:41.820351   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:35:41.823452   33509 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0103 19:35:41.823473   33509 round_trippers.go:577] Response Headers:
	I0103 19:35:41.823480   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:35:41.823485   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:35:41.823491   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:35:41.823496   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:35:41 GMT
	I0103 19:35:41.823501   33509 round_trippers.go:580]     Audit-Id: d3f8a34a-71b8-4c57-968b-c5942ae95fb1
	I0103 19:35:41.823506   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:35:41.823809   33509 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tp9s2","generateName":"kube-proxy-","namespace":"kube-system","uid":"728b1db9-b145-4ad3-b366-7fd8306d7a2a","resourceVersion":"757","creationTimestamp":"2024-01-03T19:21:56Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"93e45959-afd7-4869-a648-321076d75f45","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:21:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"93e45959-afd7-4869-a648-321076d75f45\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0103 19:35:42.020578   33509 request.go:629] Waited for 196.348807ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.191:8443/api/v1/nodes/multinode-484895
	I0103 19:35:42.020637   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895
	I0103 19:35:42.020643   33509 round_trippers.go:469] Request Headers:
	I0103 19:35:42.020650   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:35:42.020656   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:35:42.023431   33509 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:35:42.023455   33509 round_trippers.go:577] Response Headers:
	I0103 19:35:42.023465   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:35:42.023473   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:35:42.023481   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:35:42.023489   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:35:42.023497   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:35:42 GMT
	I0103 19:35:42.023508   33509 round_trippers.go:580]     Audit-Id: 135bb168-9f4a-4410-ab4d-e626af7c1fec
	I0103 19:35:42.023669   33509 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","resourceVersion":"865","creationTimestamp":"2024-01-03T19:21:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_21_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:21:40Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0103 19:35:42.024029   33509 pod_ready.go:92] pod "kube-proxy-tp9s2" in "kube-system" namespace has status "Ready":"True"
	I0103 19:35:42.024048   33509 pod_ready.go:81] duration metric: took 400.225889ms waiting for pod "kube-proxy-tp9s2" in "kube-system" namespace to be "Ready" ...
	I0103 19:35:42.024058   33509 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-484895" in "kube-system" namespace to be "Ready" ...
	I0103 19:35:42.219990   33509 request.go:629] Waited for 195.877986ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-484895
	I0103 19:35:42.220053   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-484895
	I0103 19:35:42.220057   33509 round_trippers.go:469] Request Headers:
	I0103 19:35:42.220065   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:35:42.220071   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:35:42.223099   33509 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0103 19:35:42.223134   33509 round_trippers.go:577] Response Headers:
	I0103 19:35:42.223145   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:35:42.223154   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:35:42.223161   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:35:42 GMT
	I0103 19:35:42.223170   33509 round_trippers.go:580]     Audit-Id: 065b8c48-f9a1-408f-a6ae-164dc0a1ed0d
	I0103 19:35:42.223183   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:35:42.223191   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:35:42.223296   33509 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-484895","namespace":"kube-system","uid":"f981e6c0-1f4a-44ed-b043-c69ef28b4fa5","resourceVersion":"841","creationTimestamp":"2024-01-03T19:21:44Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"2de4242735fdb53c42fed3daf21e4e5e","kubernetes.io/config.mirror":"2de4242735fdb53c42fed3daf21e4e5e","kubernetes.io/config.seen":"2024-01-03T19:21:43.948372698Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:21:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I0103 19:35:42.420108   33509 request.go:629] Waited for 196.387652ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.191:8443/api/v1/nodes/multinode-484895
	I0103 19:35:42.420175   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes/multinode-484895
	I0103 19:35:42.420184   33509 round_trippers.go:469] Request Headers:
	I0103 19:35:42.420193   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:35:42.420215   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:35:42.423546   33509 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0103 19:35:42.423569   33509 round_trippers.go:577] Response Headers:
	I0103 19:35:42.423580   33509 round_trippers.go:580]     Audit-Id: 65c65eee-ac86-4dab-9264-dab2531b8289
	I0103 19:35:42.423589   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:35:42.423598   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:35:42.423607   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:35:42.423615   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:35:42.423621   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:35:42 GMT
	I0103 19:35:42.424197   33509 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","resourceVersion":"865","creationTimestamp":"2024-01-03T19:21:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_21_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:21:40Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0103 19:35:42.424495   33509 pod_ready.go:92] pod "kube-scheduler-multinode-484895" in "kube-system" namespace has status "Ready":"True"
	I0103 19:35:42.424512   33509 pod_ready.go:81] duration metric: took 400.44773ms waiting for pod "kube-scheduler-multinode-484895" in "kube-system" namespace to be "Ready" ...
	I0103 19:35:42.424526   33509 pod_ready.go:38] duration metric: took 2.001795925s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 19:35:42.424551   33509 system_svc.go:44] waiting for kubelet service to be running ....
	I0103 19:35:42.424609   33509 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 19:35:42.438474   33509 system_svc.go:56] duration metric: took 13.915256ms WaitForService to wait for kubelet.
	I0103 19:35:42.438506   33509 kubeadm.go:581] duration metric: took 2.037848717s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0103 19:35:42.438540   33509 node_conditions.go:102] verifying NodePressure condition ...
	I0103 19:35:42.619976   33509 request.go:629] Waited for 181.366324ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.191:8443/api/v1/nodes
	I0103 19:35:42.620054   33509 round_trippers.go:463] GET https://192.168.39.191:8443/api/v1/nodes
	I0103 19:35:42.620062   33509 round_trippers.go:469] Request Headers:
	I0103 19:35:42.620071   33509 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:35:42.620081   33509 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:35:42.623157   33509 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0103 19:35:42.623185   33509 round_trippers.go:577] Response Headers:
	I0103 19:35:42.623195   33509 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:35:42.623207   33509 round_trippers.go:580]     Content-Type: application/json
	I0103 19:35:42.623216   33509 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 68d20ed2-7935-43f0-b9f5-422a010bebb4
	I0103 19:35:42.623223   33509 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a82d7be6-095f-471b-9589-97ab201ff3e5
	I0103 19:35:42.623228   33509 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:35:42 GMT
	I0103 19:35:42.623236   33509 round_trippers.go:580]     Audit-Id: db61c3ce-d589-4b92-a993-674f56c4a154
	I0103 19:35:42.623447   33509 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1208"},"items":[{"metadata":{"name":"multinode-484895","uid":"111f4fef-8252-42dd-842b-6abb4aa05059","resourceVersion":"865","creationTimestamp":"2024-01-03T19:21:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484895","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-484895","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_21_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedField
s":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time": [truncated 16238 chars]
	I0103 19:35:42.623996   33509 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0103 19:35:42.624014   33509 node_conditions.go:123] node cpu capacity is 2
	I0103 19:35:42.624023   33509 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0103 19:35:42.624027   33509 node_conditions.go:123] node cpu capacity is 2
	I0103 19:35:42.624031   33509 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0103 19:35:42.624035   33509 node_conditions.go:123] node cpu capacity is 2
	I0103 19:35:42.624038   33509 node_conditions.go:105] duration metric: took 185.492848ms to run NodePressure ...
	I0103 19:35:42.624049   33509 start.go:228] waiting for startup goroutines ...
	I0103 19:35:42.624066   33509 start.go:242] writing updated cluster config ...
	I0103 19:35:42.624344   33509 ssh_runner.go:195] Run: rm -f paused
	I0103 19:35:42.672100   33509 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0103 19:35:42.675065   33509 out.go:177] * Done! kubectl is now configured to use "multinode-484895" cluster and "default" namespace by default
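For context on what the pod_ready/round_trippers lines above record: the start sequence polls each control-plane pod in kube-system until its Ready condition is True (each wait has a 6m0s budget), and the "Waited for ... due to client-side throttling" messages come from client-go's client-side rate limiter rather than API Priority and Fairness. The sketch below is a minimal, hypothetical client-go loop that performs the same kind of readiness check; it is not minikube's implementation, and the pod name and timeout are taken from the log purely for illustration.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the PodReady condition is True, which is the
// check behind the pod_ready.go "has status Ready:True" lines in the log.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Load the default kubeconfig; minikube writes its context there.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// client-go throttles requests client-side (rest.Config QPS/Burst); the
	// "Waited for ... due to client-side throttling" log lines are emitted
	// by this limiter, not by server-side priority and fairness.
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Same 6m0s budget as the log; pod name is illustrative only.
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "etcd-multinode-484895", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for pod to be Ready")
			return
		case <-time.After(2 * time.Second):
		}
	}
}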
	
	
	==> CRI-O <==
	-- Journal begins at Wed 2024-01-03 19:31:28 UTC, ends at Wed 2024-01-03 19:35:43 UTC. --
	Jan 03 19:35:43 multinode-484895 crio[707]: time="2024-01-03 19:35:43.820823423Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:9ad859f5a9bcd6c6a0bd2338832f921639d6b6ec78c023cab3858bf9f099d65f,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-wzsqb,Uid:9e8dd1fe-7476-40d5-8cb7-562fcdc5deaa,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704310330663967936,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-wzsqb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e8dd1fe-7476-40d5-8cb7-562fcdc5deaa,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-03T19:32:02.779738516Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e8e40852ac42f052a416e68e27de4173aa75192e18dca1106033627c7e4640c5,Metadata:&PodSandboxMetadata{Name:busybox-5bc68d56bd-xlczw,Uid:442f70d7-17de-4ec1-99e0-f13f530e2d0f,Namespace:default,
Attempt:0,},State:SANDBOX_READY,CreatedAt:1704310330653747942,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-5bc68d56bd-xlczw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 442f70d7-17de-4ec1-99e0-f13f530e2d0f,pod-template-hash: 5bc68d56bd,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-03T19:32:02.779743250Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0b585a6341f70421a45eecaef3d2cd20c83d94ba44b42401b53a0334666cae97,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:82edd1c3-f361-4f86-8d59-8b89193d7a31,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704310323163711952,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82edd1c3-f361-4f86-8d59-8b89193d7a31,},Annotations:map[string]st
ring{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-01-03T19:32:02.779736040Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5da6d4fd859b659911cf281725324511ccd2b5cb9f0d7cd33c449adf1fd5e65f,Metadata:&PodSandboxMetadata{Name:kube-proxy-tp9s2,Uid:728b1db9-b145-4ad3-b366-7fd8306d7a2a,Namespace:kube-system,At
tempt:0,},State:SANDBOX_READY,CreatedAt:1704310323153694995,Labels:map[string]string{controller-revision-hash: 8486c7d9cd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-tp9s2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 728b1db9-b145-4ad3-b366-7fd8306d7a2a,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-03T19:32:02.779742213Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:845a2f085df7464c3392747b06eec750c5a114c0c1ea9ea34bd724df18a0c36b,Metadata:&PodSandboxMetadata{Name:kindnet-gqgk2,Uid:8d4f9028-52ad-44dd-83be-0bb7cc590b7f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704310323121877240,Labels:map[string]string{app: kindnet,controller-revision-hash: 5666b6c4d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-gqgk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d4f9028-52ad-44dd-83be-0bb7cc590b7f,k8s-app: kindnet,pod-template-genera
tion: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-03T19:32:02.779737369Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5e91860d1941517ecc1424957032ddb90c4169e3822b66cf3f36ee4d80813a45,Metadata:&PodSandboxMetadata{Name:etcd-multinode-484895,Uid:9bc39430cce393fdab624e5093adf15c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704310316325163278,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-484895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bc39430cce393fdab624e5093adf15c,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.191:2379,kubernetes.io/config.hash: 9bc39430cce393fdab624e5093adf15c,kubernetes.io/config.seen: 2024-01-03T19:31:55.781153843Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9acd93d01c36fa6bef9a0596a0ae506dd6ecfc877013f618f8f0e7649f4419e7,Metad
ata:&PodSandboxMetadata{Name:kube-scheduler-multinode-484895,Uid:2de4242735fdb53c42fed3daf21e4e5e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704310316315826838,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-484895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2de4242735fdb53c42fed3daf21e4e5e,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 2de4242735fdb53c42fed3daf21e4e5e,kubernetes.io/config.seen: 2024-01-03T19:31:55.781166543Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:70282a6d5ef37c891300e963d0ed7d6ee141c9f1151d5feaaae8f33e17b14676,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-484895,Uid:2adb5a2561f637a585e38e2b73f2b809,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704310316300349260,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube
-apiserver-multinode-484895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2adb5a2561f637a585e38e2b73f2b809,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.191:8443,kubernetes.io/config.hash: 2adb5a2561f637a585e38e2b73f2b809,kubernetes.io/config.seen: 2024-01-03T19:31:55.781159088Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:553f7805016660bc01f8975aa885ab92e997c47c6b30361ccb34d2f92f711b82,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-484895,Uid:091c426717be69d480bcc59d28e953ce,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704310316273979763,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-484895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 091c426717be69d480bcc59d28e953ce,tier: control-plane,},Annotations:map[string]string{kubern
etes.io/config.hash: 091c426717be69d480bcc59d28e953ce,kubernetes.io/config.seen: 2024-01-03T19:31:55.781165106Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=42d0414f-5922-4c9c-b185-0b3e62c88c52 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jan 03 19:35:43 multinode-484895 crio[707]: time="2024-01-03 19:35:43.821796293Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=dd0caec7-5bd4-4894-9f07-4bcb80010b9e name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 19:35:43 multinode-484895 crio[707]: time="2024-01-03 19:35:43.821876751Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=dd0caec7-5bd4-4894-9f07-4bcb80010b9e name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 19:35:43 multinode-484895 crio[707]: time="2024-01-03 19:35:43.822146442Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:44918517a339e1e01605d6cad7cf09c87dbd39375915efdf2b6facd7f77402bf,PodSandboxId:0b585a6341f70421a45eecaef3d2cd20c83d94ba44b42401b53a0334666cae97,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704310354029800860,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82edd1c3-f361-4f86-8d59-8b89193d7a31,},Annotations:map[string]string{io.kubernetes.container.hash: 4f3e53d1,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9fdc96a0c723162239ea0bec466562e51f06a003a30423023e9e82d53b7f151,PodSandboxId:e8e40852ac42f052a416e68e27de4173aa75192e18dca1106033627c7e4640c5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1704310333928431339,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-xlczw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 442f70d7-17de-4ec1-99e0-f13f530e2d0f,},Annotations:map[string]string{io.kubernetes.container.hash: ca5df3d1,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18b13897e53f6572afe101cf4028e71b0fc165dd98e6c05e26373a710f37fe36,PodSandboxId:9ad859f5a9bcd6c6a0bd2338832f921639d6b6ec78c023cab3858bf9f099d65f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704310331332373121,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-wzsqb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e8dd1fe-7476-40d5-8cb7-562fcdc5deaa,},Annotations:map[string]string{io.kubernetes.container.hash: bc1d7ac1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ed956dc2bced590a0d5949b22d3c5df45fa86cf9af5ad61820e3f3f2a19166d,PodSandboxId:845a2f085df7464c3392747b06eec750c5a114c0c1ea9ea34bd724df18a0c36b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1704310326083203964,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gqgk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 8d4f9028-52ad-44dd-83be-0bb7cc590b7f,},Annotations:map[string]string{io.kubernetes.container.hash: a3804f48,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d05c7c70d79a485a51b4e4c882099feec53a23ad345af078adec8a89a02a1d01,PodSandboxId:5da6d4fd859b659911cf281725324511ccd2b5cb9f0d7cd33c449adf1fd5e65f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704310323702621315,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tp9s2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 728b1db9-b145-4ad3-b366-7fd830
6d7a2a,},Annotations:map[string]string{io.kubernetes.container.hash: 7d9fa95,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c28d5ea3a9e66ac26df364685e0bd66b9f25e3e499b575db8a99c538a12d5364,PodSandboxId:5e91860d1941517ecc1424957032ddb90c4169e3822b66cf3f36ee4d80813a45,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704310317284963763,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-484895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bc39430cce393fdab624e5093adf15c,},Annotations:map[string]string{io.kubernetes.
container.hash: 447693cd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:967a597033a853b4b9ff75cf92ecad5d206f029c1ac02a56960c738d24762501,PodSandboxId:9acd93d01c36fa6bef9a0596a0ae506dd6ecfc877013f618f8f0e7649f4419e7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704310317160248079,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-484895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2de4242735fdb53c42fed3daf21e4e5e,},Annotations:map[string]string{io.kubernetes.container.ha
sh: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df75eb16dcb8a9e2e5d44f447f0342fcd9ba6153fd65f4112d1f8a1289ff8acb,PodSandboxId:70282a6d5ef37c891300e963d0ed7d6ee141c9f1151d5feaaae8f33e17b14676,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704310316909921721,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-484895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2adb5a2561f637a585e38e2b73f2b809,},Annotations:map[string]string{io.kubernetes.container.hash: 7933f556
,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41623edda2ce3e7537e196960a2f155a1fc818441f7771986017b7a1c44dc2da,PodSandboxId:553f7805016660bc01f8975aa885ab92e997c47c6b30361ccb34d2f92f711b82,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704310316823539382,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-484895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 091c426717be69d480bcc59d28e953ce,},Annotations:map[string]string{io.kubernetes.
container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=dd0caec7-5bd4-4894-9f07-4bcb80010b9e name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 19:35:43 multinode-484895 crio[707]: time="2024-01-03 19:35:43.836766296Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=610d6694-fcd4-4aa7-a946-8d32fb800800 name=/runtime.v1.RuntimeService/Version
	Jan 03 19:35:43 multinode-484895 crio[707]: time="2024-01-03 19:35:43.836841482Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=610d6694-fcd4-4aa7-a946-8d32fb800800 name=/runtime.v1.RuntimeService/Version
	Jan 03 19:35:43 multinode-484895 crio[707]: time="2024-01-03 19:35:43.838376237Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=8fd38e95-0347-4ecf-9a49-585f45703513 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 19:35:43 multinode-484895 crio[707]: time="2024-01-03 19:35:43.838809128Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704310543838792127,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=8fd38e95-0347-4ecf-9a49-585f45703513 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 19:35:43 multinode-484895 crio[707]: time="2024-01-03 19:35:43.839338841Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a72c6465-414e-4a8a-a4fb-75ecb1cc5737 name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 19:35:43 multinode-484895 crio[707]: time="2024-01-03 19:35:43.839408373Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a72c6465-414e-4a8a-a4fb-75ecb1cc5737 name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 19:35:43 multinode-484895 crio[707]: time="2024-01-03 19:35:43.839615843Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:44918517a339e1e01605d6cad7cf09c87dbd39375915efdf2b6facd7f77402bf,PodSandboxId:0b585a6341f70421a45eecaef3d2cd20c83d94ba44b42401b53a0334666cae97,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704310354029800860,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82edd1c3-f361-4f86-8d59-8b89193d7a31,},Annotations:map[string]string{io.kubernetes.container.hash: 4f3e53d1,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9fdc96a0c723162239ea0bec466562e51f06a003a30423023e9e82d53b7f151,PodSandboxId:e8e40852ac42f052a416e68e27de4173aa75192e18dca1106033627c7e4640c5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1704310333928431339,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-xlczw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 442f70d7-17de-4ec1-99e0-f13f530e2d0f,},Annotations:map[string]string{io.kubernetes.container.hash: ca5df3d1,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18b13897e53f6572afe101cf4028e71b0fc165dd98e6c05e26373a710f37fe36,PodSandboxId:9ad859f5a9bcd6c6a0bd2338832f921639d6b6ec78c023cab3858bf9f099d65f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704310331332373121,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-wzsqb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e8dd1fe-7476-40d5-8cb7-562fcdc5deaa,},Annotations:map[string]string{io.kubernetes.container.hash: bc1d7ac1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ed956dc2bced590a0d5949b22d3c5df45fa86cf9af5ad61820e3f3f2a19166d,PodSandboxId:845a2f085df7464c3392747b06eec750c5a114c0c1ea9ea34bd724df18a0c36b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1704310326083203964,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gqgk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 8d4f9028-52ad-44dd-83be-0bb7cc590b7f,},Annotations:map[string]string{io.kubernetes.container.hash: a3804f48,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d05c7c70d79a485a51b4e4c882099feec53a23ad345af078adec8a89a02a1d01,PodSandboxId:5da6d4fd859b659911cf281725324511ccd2b5cb9f0d7cd33c449adf1fd5e65f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704310323702621315,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tp9s2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 728b1db9-b145-4ad3-b366-7fd830
6d7a2a,},Annotations:map[string]string{io.kubernetes.container.hash: 7d9fa95,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a0425b1691836f1298fad30060609c3806e04aabc02997cc20e893e1e3cb72e,PodSandboxId:0b585a6341f70421a45eecaef3d2cd20c83d94ba44b42401b53a0334666cae97,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1704310323643413610,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82edd1c3-f361-4f86-8d59-8b89193d7
a31,},Annotations:map[string]string{io.kubernetes.container.hash: 4f3e53d1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c28d5ea3a9e66ac26df364685e0bd66b9f25e3e499b575db8a99c538a12d5364,PodSandboxId:5e91860d1941517ecc1424957032ddb90c4169e3822b66cf3f36ee4d80813a45,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704310317284963763,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-484895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bc39430cce393fdab624e5093adf15c,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 447693cd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:967a597033a853b4b9ff75cf92ecad5d206f029c1ac02a56960c738d24762501,PodSandboxId:9acd93d01c36fa6bef9a0596a0ae506dd6ecfc877013f618f8f0e7649f4419e7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704310317160248079,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-484895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2de4242735fdb53c42fed3daf21e4e5e,},Annotations:map[string]string{io.kubernetes.container.hash
: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df75eb16dcb8a9e2e5d44f447f0342fcd9ba6153fd65f4112d1f8a1289ff8acb,PodSandboxId:70282a6d5ef37c891300e963d0ed7d6ee141c9f1151d5feaaae8f33e17b14676,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704310316909921721,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-484895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2adb5a2561f637a585e38e2b73f2b809,},Annotations:map[string]string{io.kubernetes.container.hash: 7933f556,i
o.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41623edda2ce3e7537e196960a2f155a1fc818441f7771986017b7a1c44dc2da,PodSandboxId:553f7805016660bc01f8975aa885ab92e997c47c6b30361ccb34d2f92f711b82,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704310316823539382,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-484895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 091c426717be69d480bcc59d28e953ce,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a72c6465-414e-4a8a-a4fb-75ecb1cc5737 name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 19:35:43 multinode-484895 crio[707]: time="2024-01-03 19:35:43.883441636Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=4874a20f-1cf6-468f-b148-c8b26d3176aa name=/runtime.v1.RuntimeService/Version
	Jan 03 19:35:43 multinode-484895 crio[707]: time="2024-01-03 19:35:43.883516465Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=4874a20f-1cf6-468f-b148-c8b26d3176aa name=/runtime.v1.RuntimeService/Version
	Jan 03 19:35:43 multinode-484895 crio[707]: time="2024-01-03 19:35:43.884883029Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=f76d759d-bf04-4bc7-a0ef-5b8f1b198bfb name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 19:35:43 multinode-484895 crio[707]: time="2024-01-03 19:35:43.885394413Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704310543885378209,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=f76d759d-bf04-4bc7-a0ef-5b8f1b198bfb name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 19:35:43 multinode-484895 crio[707]: time="2024-01-03 19:35:43.886074425Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a5671ba1-a391-470f-b97c-d0be43cbac15 name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 19:35:43 multinode-484895 crio[707]: time="2024-01-03 19:35:43.886142856Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a5671ba1-a391-470f-b97c-d0be43cbac15 name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 19:35:43 multinode-484895 crio[707]: time="2024-01-03 19:35:43.886352112Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:44918517a339e1e01605d6cad7cf09c87dbd39375915efdf2b6facd7f77402bf,PodSandboxId:0b585a6341f70421a45eecaef3d2cd20c83d94ba44b42401b53a0334666cae97,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704310354029800860,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82edd1c3-f361-4f86-8d59-8b89193d7a31,},Annotations:map[string]string{io.kubernetes.container.hash: 4f3e53d1,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9fdc96a0c723162239ea0bec466562e51f06a003a30423023e9e82d53b7f151,PodSandboxId:e8e40852ac42f052a416e68e27de4173aa75192e18dca1106033627c7e4640c5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1704310333928431339,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-xlczw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 442f70d7-17de-4ec1-99e0-f13f530e2d0f,},Annotations:map[string]string{io.kubernetes.container.hash: ca5df3d1,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18b13897e53f6572afe101cf4028e71b0fc165dd98e6c05e26373a710f37fe36,PodSandboxId:9ad859f5a9bcd6c6a0bd2338832f921639d6b6ec78c023cab3858bf9f099d65f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704310331332373121,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-wzsqb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e8dd1fe-7476-40d5-8cb7-562fcdc5deaa,},Annotations:map[string]string{io.kubernetes.container.hash: bc1d7ac1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ed956dc2bced590a0d5949b22d3c5df45fa86cf9af5ad61820e3f3f2a19166d,PodSandboxId:845a2f085df7464c3392747b06eec750c5a114c0c1ea9ea34bd724df18a0c36b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1704310326083203964,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gqgk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 8d4f9028-52ad-44dd-83be-0bb7cc590b7f,},Annotations:map[string]string{io.kubernetes.container.hash: a3804f48,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d05c7c70d79a485a51b4e4c882099feec53a23ad345af078adec8a89a02a1d01,PodSandboxId:5da6d4fd859b659911cf281725324511ccd2b5cb9f0d7cd33c449adf1fd5e65f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704310323702621315,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tp9s2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 728b1db9-b145-4ad3-b366-7fd830
6d7a2a,},Annotations:map[string]string{io.kubernetes.container.hash: 7d9fa95,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a0425b1691836f1298fad30060609c3806e04aabc02997cc20e893e1e3cb72e,PodSandboxId:0b585a6341f70421a45eecaef3d2cd20c83d94ba44b42401b53a0334666cae97,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1704310323643413610,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82edd1c3-f361-4f86-8d59-8b89193d7
a31,},Annotations:map[string]string{io.kubernetes.container.hash: 4f3e53d1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c28d5ea3a9e66ac26df364685e0bd66b9f25e3e499b575db8a99c538a12d5364,PodSandboxId:5e91860d1941517ecc1424957032ddb90c4169e3822b66cf3f36ee4d80813a45,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704310317284963763,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-484895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bc39430cce393fdab624e5093adf15c,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 447693cd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:967a597033a853b4b9ff75cf92ecad5d206f029c1ac02a56960c738d24762501,PodSandboxId:9acd93d01c36fa6bef9a0596a0ae506dd6ecfc877013f618f8f0e7649f4419e7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704310317160248079,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-484895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2de4242735fdb53c42fed3daf21e4e5e,},Annotations:map[string]string{io.kubernetes.container.hash
: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df75eb16dcb8a9e2e5d44f447f0342fcd9ba6153fd65f4112d1f8a1289ff8acb,PodSandboxId:70282a6d5ef37c891300e963d0ed7d6ee141c9f1151d5feaaae8f33e17b14676,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704310316909921721,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-484895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2adb5a2561f637a585e38e2b73f2b809,},Annotations:map[string]string{io.kubernetes.container.hash: 7933f556,i
o.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41623edda2ce3e7537e196960a2f155a1fc818441f7771986017b7a1c44dc2da,PodSandboxId:553f7805016660bc01f8975aa885ab92e997c47c6b30361ccb34d2f92f711b82,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704310316823539382,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-484895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 091c426717be69d480bcc59d28e953ce,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a5671ba1-a391-470f-b97c-d0be43cbac15 name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 19:35:43 multinode-484895 crio[707]: time="2024-01-03 19:35:43.934514335Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=a3ef3339-4fab-4d6a-90a2-dddff79aff18 name=/runtime.v1.RuntimeService/Version
	Jan 03 19:35:43 multinode-484895 crio[707]: time="2024-01-03 19:35:43.934604417Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=a3ef3339-4fab-4d6a-90a2-dddff79aff18 name=/runtime.v1.RuntimeService/Version
	Jan 03 19:35:43 multinode-484895 crio[707]: time="2024-01-03 19:35:43.936486510Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=00fb799e-3319-4b98-bf4f-c7fb820e9462 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 19:35:43 multinode-484895 crio[707]: time="2024-01-03 19:35:43.936941951Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704310543936913664,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=00fb799e-3319-4b98-bf4f-c7fb820e9462 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 19:35:43 multinode-484895 crio[707]: time="2024-01-03 19:35:43.938594925Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=0aa27c39-b6f7-44fc-8e50-b77a333d0867 name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 19:35:43 multinode-484895 crio[707]: time="2024-01-03 19:35:43.938755804Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=0aa27c39-b6f7-44fc-8e50-b77a333d0867 name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 19:35:43 multinode-484895 crio[707]: time="2024-01-03 19:35:43.939033699Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:44918517a339e1e01605d6cad7cf09c87dbd39375915efdf2b6facd7f77402bf,PodSandboxId:0b585a6341f70421a45eecaef3d2cd20c83d94ba44b42401b53a0334666cae97,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704310354029800860,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82edd1c3-f361-4f86-8d59-8b89193d7a31,},Annotations:map[string]string{io.kubernetes.container.hash: 4f3e53d1,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9fdc96a0c723162239ea0bec466562e51f06a003a30423023e9e82d53b7f151,PodSandboxId:e8e40852ac42f052a416e68e27de4173aa75192e18dca1106033627c7e4640c5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1704310333928431339,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-xlczw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 442f70d7-17de-4ec1-99e0-f13f530e2d0f,},Annotations:map[string]string{io.kubernetes.container.hash: ca5df3d1,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18b13897e53f6572afe101cf4028e71b0fc165dd98e6c05e26373a710f37fe36,PodSandboxId:9ad859f5a9bcd6c6a0bd2338832f921639d6b6ec78c023cab3858bf9f099d65f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704310331332373121,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-wzsqb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e8dd1fe-7476-40d5-8cb7-562fcdc5deaa,},Annotations:map[string]string{io.kubernetes.container.hash: bc1d7ac1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ed956dc2bced590a0d5949b22d3c5df45fa86cf9af5ad61820e3f3f2a19166d,PodSandboxId:845a2f085df7464c3392747b06eec750c5a114c0c1ea9ea34bd724df18a0c36b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1704310326083203964,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gqgk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 8d4f9028-52ad-44dd-83be-0bb7cc590b7f,},Annotations:map[string]string{io.kubernetes.container.hash: a3804f48,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d05c7c70d79a485a51b4e4c882099feec53a23ad345af078adec8a89a02a1d01,PodSandboxId:5da6d4fd859b659911cf281725324511ccd2b5cb9f0d7cd33c449adf1fd5e65f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704310323702621315,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tp9s2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 728b1db9-b145-4ad3-b366-7fd830
6d7a2a,},Annotations:map[string]string{io.kubernetes.container.hash: 7d9fa95,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a0425b1691836f1298fad30060609c3806e04aabc02997cc20e893e1e3cb72e,PodSandboxId:0b585a6341f70421a45eecaef3d2cd20c83d94ba44b42401b53a0334666cae97,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1704310323643413610,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82edd1c3-f361-4f86-8d59-8b89193d7
a31,},Annotations:map[string]string{io.kubernetes.container.hash: 4f3e53d1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c28d5ea3a9e66ac26df364685e0bd66b9f25e3e499b575db8a99c538a12d5364,PodSandboxId:5e91860d1941517ecc1424957032ddb90c4169e3822b66cf3f36ee4d80813a45,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704310317284963763,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-484895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bc39430cce393fdab624e5093adf15c,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 447693cd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:967a597033a853b4b9ff75cf92ecad5d206f029c1ac02a56960c738d24762501,PodSandboxId:9acd93d01c36fa6bef9a0596a0ae506dd6ecfc877013f618f8f0e7649f4419e7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704310317160248079,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-484895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2de4242735fdb53c42fed3daf21e4e5e,},Annotations:map[string]string{io.kubernetes.container.hash
: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df75eb16dcb8a9e2e5d44f447f0342fcd9ba6153fd65f4112d1f8a1289ff8acb,PodSandboxId:70282a6d5ef37c891300e963d0ed7d6ee141c9f1151d5feaaae8f33e17b14676,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704310316909921721,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-484895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2adb5a2561f637a585e38e2b73f2b809,},Annotations:map[string]string{io.kubernetes.container.hash: 7933f556,i
o.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41623edda2ce3e7537e196960a2f155a1fc818441f7771986017b7a1c44dc2da,PodSandboxId:553f7805016660bc01f8975aa885ab92e997c47c6b30361ccb34d2f92f711b82,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704310316823539382,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-484895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 091c426717be69d480bcc59d28e953ce,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=0aa27c39-b6f7-44fc-8e50-b77a333d0867 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	44918517a339e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       2                   0b585a6341f70       storage-provisioner
	b9fdc96a0c723       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   1                   e8e40852ac42f       busybox-5bc68d56bd-xlczw
	18b13897e53f6       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      3 minutes ago       Running             coredns                   1                   9ad859f5a9bcd       coredns-5dd5756b68-wzsqb
	9ed956dc2bced       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                      3 minutes ago       Running             kindnet-cni               1                   845a2f085df74       kindnet-gqgk2
	d05c7c70d79a4       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      3 minutes ago       Running             kube-proxy                1                   5da6d4fd859b6       kube-proxy-tp9s2
	2a0425b169183       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Exited              storage-provisioner       1                   0b585a6341f70       storage-provisioner
	c28d5ea3a9e66       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      3 minutes ago       Running             etcd                      1                   5e91860d19415       etcd-multinode-484895
	967a597033a85       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      3 minutes ago       Running             kube-scheduler            1                   9acd93d01c36f       kube-scheduler-multinode-484895
	df75eb16dcb8a       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      3 minutes ago       Running             kube-apiserver            1                   70282a6d5ef37       kube-apiserver-multinode-484895
	41623edda2ce3       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      3 minutes ago       Running             kube-controller-manager   1                   553f780501666       kube-controller-manager-multinode-484895
	
	
	==> coredns [18b13897e53f6572afe101cf4028e71b0fc165dd98e6c05e26373a710f37fe36] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:57256 - 39660 "HINFO IN 1179720369892617257.7967859621950079037. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020238131s
	
	
	==> describe nodes <==
	Name:               multinode-484895
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-484895
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b6a81cbc05f28310ff11df4170e79e2b8bf477a
	                    minikube.k8s.io/name=multinode-484895
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_03T19_21_45_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 03 Jan 2024 19:21:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-484895
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 03 Jan 2024 19:35:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 03 Jan 2024 19:32:32 +0000   Wed, 03 Jan 2024 19:21:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 03 Jan 2024 19:32:32 +0000   Wed, 03 Jan 2024 19:21:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 03 Jan 2024 19:32:32 +0000   Wed, 03 Jan 2024 19:21:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 03 Jan 2024 19:32:32 +0000   Wed, 03 Jan 2024 19:32:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.191
	  Hostname:    multinode-484895
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 e5c89e44ca554cc2a8a70afbb74e5669
	  System UUID:                e5c89e44-ca55-4cc2-a8a7-0afbb74e5669
	  Boot ID:                    0d7adbfc-68fa-44fb-851e-f86d73a4c0c8
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-xlczw                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-5dd5756b68-wzsqb                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-multinode-484895                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-gqgk2                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-multinode-484895             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-multinode-484895    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-tp9s2                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-multinode-484895             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 13m                    kube-proxy       
	  Normal  Starting                 3m40s                  kube-proxy       
	  Normal  Starting                 14m                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    14m                    kubelet          Node multinode-484895 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m                    kubelet          Node multinode-484895 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  14m                    kubelet          Node multinode-484895 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           13m                    node-controller  Node multinode-484895 event: Registered Node multinode-484895 in Controller
	  Normal  NodeReady                13m                    kubelet          Node multinode-484895 status is now: NodeReady
	  Normal  Starting                 3m49s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m49s (x8 over 3m49s)  kubelet          Node multinode-484895 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m49s (x8 over 3m49s)  kubelet          Node multinode-484895 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m49s (x7 over 3m49s)  kubelet          Node multinode-484895 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m49s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m30s                  node-controller  Node multinode-484895 event: Registered Node multinode-484895 in Controller
	
	
	Name:               multinode-484895-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-484895-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b6a81cbc05f28310ff11df4170e79e2b8bf477a
	                    minikube.k8s.io/name=multinode-484895
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_01_03T19_35_39_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 03 Jan 2024 19:33:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-484895-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 03 Jan 2024 19:35:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 03 Jan 2024 19:33:57 +0000   Wed, 03 Jan 2024 19:33:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 03 Jan 2024 19:33:57 +0000   Wed, 03 Jan 2024 19:33:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 03 Jan 2024 19:33:57 +0000   Wed, 03 Jan 2024 19:33:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 03 Jan 2024 19:33:57 +0000   Wed, 03 Jan 2024 19:33:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.86
	  Hostname:    multinode-484895-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 e24fb0b823cc45c0b97314958a26978c
	  System UUID:                e24fb0b8-23cc-45c0-b973-14958a26978c
	  Boot ID:                    220da2d3-eb9e-489d-9d50-c1c84cedcbb3
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-7hhkv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kindnet-lfkpk               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-k7jnm            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From        Message
	  ----     ------                   ----                   ----        -------
	  Normal   Starting                 13m                    kube-proxy  
	  Normal   Starting                 104s                   kube-proxy  
	  Normal   NodeHasSufficientMemory  13m (x5 over 13m)      kubelet     Node multinode-484895-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x5 over 13m)      kubelet     Node multinode-484895-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x5 over 13m)      kubelet     Node multinode-484895-m02 status is now: NodeHasSufficientPID
	  Normal   NodeReady                13m                    kubelet     Node multinode-484895-m02 status is now: NodeReady
	  Normal   NodeNotReady             2m57s                  kubelet     Node multinode-484895-m02 status is now: NodeNotReady
	  Warning  ContainerGCFailed        2m12s (x2 over 3m12s)  kubelet     rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   Starting                 107s                   kubelet     Starting kubelet.
	  Normal   NodeHasSufficientMemory  107s (x2 over 107s)    kubelet     Node multinode-484895-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    107s (x2 over 107s)    kubelet     Node multinode-484895-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     107s (x2 over 107s)    kubelet     Node multinode-484895-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  107s                   kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeReady                107s                   kubelet     Node multinode-484895-m02 status is now: NodeReady
	
	
	Name:               multinode-484895-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-484895-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b6a81cbc05f28310ff11df4170e79e2b8bf477a
	                    minikube.k8s.io/name=multinode-484895
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_01_03T19_35_39_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 03 Jan 2024 19:35:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:              Failed to get lease: leases.coordination.k8s.io "multinode-484895-m03" not found
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 03 Jan 2024 19:35:39 +0000   Wed, 03 Jan 2024 19:35:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 03 Jan 2024 19:35:39 +0000   Wed, 03 Jan 2024 19:35:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 03 Jan 2024 19:35:39 +0000   Wed, 03 Jan 2024 19:35:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 03 Jan 2024 19:35:39 +0000   Wed, 03 Jan 2024 19:35:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.156
	  Hostname:    multinode-484895-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 d48dde730a45406ea5c66395adf40bcc
	  System UUID:                d48dde73-0a45-406e-a5c6-6395adf40bcc
	  Boot ID:                    dc526393-e308-4fb0-a6b8-edad573a9a56
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-cgps8    0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kindnet-zt7zf               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-strp6            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From        Message
	  ----     ------                   ----               ----        -------
	  Normal   Starting                 11m                kube-proxy  
	  Normal   Starting                 12m                kube-proxy  
	  Normal   Starting                 3s                 kube-proxy  
	  Normal   NodeHasNoDiskPressure    12m (x5 over 12m)  kubelet     Node multinode-484895-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x5 over 12m)  kubelet     Node multinode-484895-m03 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  12m (x5 over 12m)  kubelet     Node multinode-484895-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeReady                12m                kubelet     Node multinode-484895-m03 status is now: NodeReady
	  Normal   Starting                 11m                kubelet     Starting kubelet.
	  Normal   NodeAllocatableEnforced  11m                kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeReady                11m                kubelet     Node multinode-484895-m03 status is now: NodeReady
	  Normal   NodeNotReady             65s                kubelet     Node multinode-484895-m03 status is now: NodeNotReady
	  Warning  ContainerGCFailed        38s (x2 over 98s)  kubelet     rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeHasSufficientMemory  6s (x5 over 11m)   kubelet     Node multinode-484895-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6s (x5 over 11m)   kubelet     Node multinode-484895-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6s (x5 over 11m)   kubelet     Node multinode-484895-m03 status is now: NodeHasSufficientPID
	  Normal   Starting                 5s                 kubelet     Starting kubelet.
	  Normal   NodeHasNoDiskPressure    5s (x2 over 5s)    kubelet     Node multinode-484895-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5s (x2 over 5s)    kubelet     Node multinode-484895-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  5s                 kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeReady                5s                 kubelet     Node multinode-484895-m03 status is now: NodeReady
	  Normal   NodeHasSufficientMemory  5s (x2 over 5s)    kubelet     Node multinode-484895-m03 status is now: NodeHasSufficientMemory
	
	
	==> dmesg <==
	[Jan 3 19:31] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.062156] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.281775] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.728680] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.133074] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.375692] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000039] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.437029] systemd-fstab-generator[634]: Ignoring "noauto" for root device
	[  +0.104228] systemd-fstab-generator[645]: Ignoring "noauto" for root device
	[  +0.147842] systemd-fstab-generator[658]: Ignoring "noauto" for root device
	[  +0.107903] systemd-fstab-generator[669]: Ignoring "noauto" for root device
	[  +0.213117] systemd-fstab-generator[693]: Ignoring "noauto" for root device
	[ +16.703720] systemd-fstab-generator[904]: Ignoring "noauto" for root device
	[Jan 3 19:32] kauditd_printk_skb: 18 callbacks suppressed
	
	
	==> etcd [c28d5ea3a9e66ac26df364685e0bd66b9f25e3e499b575db8a99c538a12d5364] <==
	{"level":"info","ts":"2024-01-03T19:31:58.836917Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-03T19:31:58.836924Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-03T19:31:58.838129Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f21a8e08563785d2 switched to configuration voters=(17445412273030399442)"}
	{"level":"info","ts":"2024-01-03T19:31:58.83821Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"78cc5c67b96828b5","local-member-id":"f21a8e08563785d2","added-peer-id":"f21a8e08563785d2","added-peer-peer-urls":["https://192.168.39.191:2380"]}
	{"level":"info","ts":"2024-01-03T19:31:58.838327Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"78cc5c67b96828b5","local-member-id":"f21a8e08563785d2","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-03T19:31:58.838354Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-03T19:31:58.843527Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-01-03T19:31:58.843717Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"f21a8e08563785d2","initial-advertise-peer-urls":["https://192.168.39.191:2380"],"listen-peer-urls":["https://192.168.39.191:2380"],"advertise-client-urls":["https://192.168.39.191:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.191:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-01-03T19:31:58.843744Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-01-03T19:31:58.843838Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.191:2380"}
	{"level":"info","ts":"2024-01-03T19:31:58.843844Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.191:2380"}
	{"level":"info","ts":"2024-01-03T19:32:00.593089Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f21a8e08563785d2 is starting a new election at term 2"}
	{"level":"info","ts":"2024-01-03T19:32:00.593199Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f21a8e08563785d2 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-01-03T19:32:00.593261Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f21a8e08563785d2 received MsgPreVoteResp from f21a8e08563785d2 at term 2"}
	{"level":"info","ts":"2024-01-03T19:32:00.593303Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f21a8e08563785d2 became candidate at term 3"}
	{"level":"info","ts":"2024-01-03T19:32:00.593327Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f21a8e08563785d2 received MsgVoteResp from f21a8e08563785d2 at term 3"}
	{"level":"info","ts":"2024-01-03T19:32:00.593359Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f21a8e08563785d2 became leader at term 3"}
	{"level":"info","ts":"2024-01-03T19:32:00.593384Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f21a8e08563785d2 elected leader f21a8e08563785d2 at term 3"}
	{"level":"info","ts":"2024-01-03T19:32:00.595496Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-03T19:32:00.595446Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f21a8e08563785d2","local-member-attributes":"{Name:multinode-484895 ClientURLs:[https://192.168.39.191:2379]}","request-path":"/0/members/f21a8e08563785d2/attributes","cluster-id":"78cc5c67b96828b5","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-03T19:32:00.596323Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-03T19:32:00.596737Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-03T19:32:00.597286Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.191:2379"}
	{"level":"info","ts":"2024-01-03T19:32:00.597411Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-03T19:32:00.597445Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 19:35:44 up 4 min,  0 users,  load average: 0.25, 0.17, 0.08
	Linux multinode-484895 5.10.57 #1 SMP Sat Dec 16 11:03:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kindnet [9ed956dc2bced590a0d5949b22d3c5df45fa86cf9af5ad61820e3f3f2a19166d] <==
	I0103 19:34:57.616448       1 main.go:250] Node multinode-484895-m03 has CIDR [10.244.3.0/24] 
	I0103 19:35:07.623515       1 main.go:223] Handling node with IPs: map[192.168.39.191:{}]
	I0103 19:35:07.623569       1 main.go:227] handling current node
	I0103 19:35:07.623601       1 main.go:223] Handling node with IPs: map[192.168.39.86:{}]
	I0103 19:35:07.623608       1 main.go:250] Node multinode-484895-m02 has CIDR [10.244.1.0/24] 
	I0103 19:35:07.623741       1 main.go:223] Handling node with IPs: map[192.168.39.156:{}]
	I0103 19:35:07.623767       1 main.go:250] Node multinode-484895-m03 has CIDR [10.244.3.0/24] 
	I0103 19:35:17.638130       1 main.go:223] Handling node with IPs: map[192.168.39.191:{}]
	I0103 19:35:17.638287       1 main.go:227] handling current node
	I0103 19:35:17.638315       1 main.go:223] Handling node with IPs: map[192.168.39.86:{}]
	I0103 19:35:17.638348       1 main.go:250] Node multinode-484895-m02 has CIDR [10.244.1.0/24] 
	I0103 19:35:17.638477       1 main.go:223] Handling node with IPs: map[192.168.39.156:{}]
	I0103 19:35:17.638529       1 main.go:250] Node multinode-484895-m03 has CIDR [10.244.3.0/24] 
	I0103 19:35:27.653935       1 main.go:223] Handling node with IPs: map[192.168.39.191:{}]
	I0103 19:35:27.655649       1 main.go:227] handling current node
	I0103 19:35:27.655768       1 main.go:223] Handling node with IPs: map[192.168.39.86:{}]
	I0103 19:35:27.656395       1 main.go:250] Node multinode-484895-m02 has CIDR [10.244.1.0/24] 
	I0103 19:35:27.656704       1 main.go:223] Handling node with IPs: map[192.168.39.156:{}]
	I0103 19:35:27.656743       1 main.go:250] Node multinode-484895-m03 has CIDR [10.244.3.0/24] 
	I0103 19:35:37.662381       1 main.go:223] Handling node with IPs: map[192.168.39.191:{}]
	I0103 19:35:37.662430       1 main.go:227] handling current node
	I0103 19:35:37.662445       1 main.go:223] Handling node with IPs: map[192.168.39.86:{}]
	I0103 19:35:37.662451       1 main.go:250] Node multinode-484895-m02 has CIDR [10.244.1.0/24] 
	I0103 19:35:37.662561       1 main.go:223] Handling node with IPs: map[192.168.39.156:{}]
	I0103 19:35:37.662589       1 main.go:250] Node multinode-484895-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [df75eb16dcb8a9e2e5d44f447f0342fcd9ba6153fd65f4112d1f8a1289ff8acb] <==
	I0103 19:32:01.946668       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0103 19:32:02.015395       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0103 19:32:02.015686       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0103 19:32:02.055175       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0103 19:32:02.055280       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0103 19:32:02.056532       1 shared_informer.go:318] Caches are synced for configmaps
	I0103 19:32:02.056929       1 aggregator.go:166] initial CRD sync complete...
	I0103 19:32:02.057053       1 autoregister_controller.go:141] Starting autoregister controller
	I0103 19:32:02.057081       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0103 19:32:02.065522       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0103 19:32:02.074090       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0103 19:32:02.153344       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0103 19:32:02.156080       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0103 19:32:02.158424       1 cache.go:39] Caches are synced for autoregister controller
	I0103 19:32:02.158703       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0103 19:32:02.158789       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	E0103 19:32:02.196981       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0103 19:32:02.972720       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0103 19:32:04.979112       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0103 19:32:05.127096       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0103 19:32:05.137327       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0103 19:32:05.213086       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0103 19:32:05.220728       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0103 19:32:14.474766       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0103 19:32:14.488888       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [41623edda2ce3e7537e196960a2f155a1fc818441f7771986017b7a1c44dc2da] <==
	I0103 19:33:57.712260       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-484895-m03"
	I0103 19:33:57.712373       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-lmcnh" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-lmcnh"
	I0103 19:33:57.726851       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-484895-m02" podCIDRs=["10.244.1.0/24"]
	I0103 19:33:57.837277       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-484895-m02"
	I0103 19:33:58.617694       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="65.016µs"
	I0103 19:33:59.053711       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="6.489776ms"
	I0103 19:33:59.053797       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="36.874µs"
	I0103 19:34:11.880895       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="69.837µs"
	I0103 19:34:12.467201       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="52.775µs"
	I0103 19:34:12.470962       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="77.261µs"
	I0103 19:34:39.708736       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-484895-m02"
	I0103 19:35:35.464577       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-7hhkv"
	I0103 19:35:35.477420       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="29.107526ms"
	I0103 19:35:35.501826       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="24.295143ms"
	I0103 19:35:35.501941       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="55.957µs"
	I0103 19:35:36.711848       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="5.645875ms"
	I0103 19:35:36.713586       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="103.461µs"
	I0103 19:35:38.475345       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-484895-m02"
	I0103 19:35:38.640406       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="99.501µs"
	I0103 19:35:39.297108       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-484895-m03\" does not exist"
	I0103 19:35:39.298192       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-cgps8" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-cgps8"
	I0103 19:35:39.298253       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-484895-m02"
	I0103 19:35:39.328176       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-484895-m03" podCIDRs=["10.244.2.0/24"]
	I0103 19:35:39.447522       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-484895-m02"
	I0103 19:35:40.202544       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="77.394µs"
	
	
	==> kube-proxy [d05c7c70d79a485a51b4e4c882099feec53a23ad345af078adec8a89a02a1d01] <==
	I0103 19:32:04.004570       1 server_others.go:69] "Using iptables proxy"
	I0103 19:32:04.020762       1 node.go:141] Successfully retrieved node IP: 192.168.39.191
	I0103 19:32:04.111124       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0103 19:32:04.111200       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0103 19:32:04.113531       1 server_others.go:152] "Using iptables Proxier"
	I0103 19:32:04.113604       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0103 19:32:04.113751       1 server.go:846] "Version info" version="v1.28.4"
	I0103 19:32:04.113878       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0103 19:32:04.114552       1 config.go:188] "Starting service config controller"
	I0103 19:32:04.114603       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0103 19:32:04.114700       1 config.go:97] "Starting endpoint slice config controller"
	I0103 19:32:04.114738       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0103 19:32:04.115270       1 config.go:315] "Starting node config controller"
	I0103 19:32:04.115307       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0103 19:32:04.217715       1 shared_informer.go:318] Caches are synced for service config
	I0103 19:32:04.217782       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0103 19:32:04.218098       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [967a597033a853b4b9ff75cf92ecad5d206f029c1ac02a56960c738d24762501] <==
	I0103 19:31:59.417277       1 serving.go:348] Generated self-signed cert in-memory
	W0103 19:32:02.024608       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0103 19:32:02.024765       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0103 19:32:02.024782       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0103 19:32:02.024860       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0103 19:32:02.079146       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0103 19:32:02.079199       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0103 19:32:02.080844       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0103 19:32:02.080977       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0103 19:32:02.082707       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0103 19:32:02.081381       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0103 19:32:02.184337       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Wed 2024-01-03 19:31:28 UTC, ends at Wed 2024-01-03 19:35:44 UTC. --
	Jan 03 19:32:04 multinode-484895 kubelet[910]: E0103 19:32:04.469735     910 projected.go:198] Error preparing data for projected volume kube-api-access-mnjfk for pod default/busybox-5bc68d56bd-xlczw: object "default"/"kube-root-ca.crt" not registered
	Jan 03 19:32:04 multinode-484895 kubelet[910]: E0103 19:32:04.469789     910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/442f70d7-17de-4ec1-99e0-f13f530e2d0f-kube-api-access-mnjfk podName:442f70d7-17de-4ec1-99e0-f13f530e2d0f nodeName:}" failed. No retries permitted until 2024-01-03 19:32:06.469774484 +0000 UTC m=+10.909955710 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-mnjfk" (UniqueName: "kubernetes.io/projected/442f70d7-17de-4ec1-99e0-f13f530e2d0f-kube-api-access-mnjfk") pod "busybox-5bc68d56bd-xlczw" (UID: "442f70d7-17de-4ec1-99e0-f13f530e2d0f") : object "default"/"kube-root-ca.crt" not registered
	Jan 03 19:32:04 multinode-484895 kubelet[910]: E0103 19:32:04.817881     910 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="default/busybox-5bc68d56bd-xlczw" podUID="442f70d7-17de-4ec1-99e0-f13f530e2d0f"
	Jan 03 19:32:04 multinode-484895 kubelet[910]: E0103 19:32:04.818180     910 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-5dd5756b68-wzsqb" podUID="9e8dd1fe-7476-40d5-8cb7-562fcdc5deaa"
	Jan 03 19:32:06 multinode-484895 kubelet[910]: E0103 19:32:06.383723     910 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jan 03 19:32:06 multinode-484895 kubelet[910]: E0103 19:32:06.383787     910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e8dd1fe-7476-40d5-8cb7-562fcdc5deaa-config-volume podName:9e8dd1fe-7476-40d5-8cb7-562fcdc5deaa nodeName:}" failed. No retries permitted until 2024-01-03 19:32:10.383770974 +0000 UTC m=+14.823952207 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/9e8dd1fe-7476-40d5-8cb7-562fcdc5deaa-config-volume") pod "coredns-5dd5756b68-wzsqb" (UID: "9e8dd1fe-7476-40d5-8cb7-562fcdc5deaa") : object "kube-system"/"coredns" not registered
	Jan 03 19:32:06 multinode-484895 kubelet[910]: E0103 19:32:06.483914     910 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Jan 03 19:32:06 multinode-484895 kubelet[910]: E0103 19:32:06.483943     910 projected.go:198] Error preparing data for projected volume kube-api-access-mnjfk for pod default/busybox-5bc68d56bd-xlczw: object "default"/"kube-root-ca.crt" not registered
	Jan 03 19:32:06 multinode-484895 kubelet[910]: E0103 19:32:06.484043     910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/442f70d7-17de-4ec1-99e0-f13f530e2d0f-kube-api-access-mnjfk podName:442f70d7-17de-4ec1-99e0-f13f530e2d0f nodeName:}" failed. No retries permitted until 2024-01-03 19:32:10.483976509 +0000 UTC m=+14.924157742 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-mnjfk" (UniqueName: "kubernetes.io/projected/442f70d7-17de-4ec1-99e0-f13f530e2d0f-kube-api-access-mnjfk") pod "busybox-5bc68d56bd-xlczw" (UID: "442f70d7-17de-4ec1-99e0-f13f530e2d0f") : object "default"/"kube-root-ca.crt" not registered
	Jan 03 19:32:06 multinode-484895 kubelet[910]: E0103 19:32:06.817937     910 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="default/busybox-5bc68d56bd-xlczw" podUID="442f70d7-17de-4ec1-99e0-f13f530e2d0f"
	Jan 03 19:32:06 multinode-484895 kubelet[910]: E0103 19:32:06.818146     910 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-5dd5756b68-wzsqb" podUID="9e8dd1fe-7476-40d5-8cb7-562fcdc5deaa"
	Jan 03 19:32:07 multinode-484895 kubelet[910]: I0103 19:32:07.506672     910 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Jan 03 19:32:33 multinode-484895 kubelet[910]: I0103 19:32:33.995527     910 scope.go:117] "RemoveContainer" containerID="2a0425b1691836f1298fad30060609c3806e04aabc02997cc20e893e1e3cb72e"
	Jan 03 19:32:55 multinode-484895 kubelet[910]: E0103 19:32:55.837130     910 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 03 19:32:55 multinode-484895 kubelet[910]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 03 19:32:55 multinode-484895 kubelet[910]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 03 19:32:55 multinode-484895 kubelet[910]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 03 19:33:55 multinode-484895 kubelet[910]: E0103 19:33:55.837649     910 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 03 19:33:55 multinode-484895 kubelet[910]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 03 19:33:55 multinode-484895 kubelet[910]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 03 19:33:55 multinode-484895 kubelet[910]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 03 19:34:55 multinode-484895 kubelet[910]: E0103 19:34:55.843193     910 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 03 19:34:55 multinode-484895 kubelet[910]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 03 19:34:55 multinode-484895 kubelet[910]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 03 19:34:55 multinode-484895 kubelet[910]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-484895 -n multinode-484895
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-484895 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (687.12s)
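Editor's note: the post-mortem step above queries pods with --field-selector=status.phase!=Running. Below is a minimal client-go sketch of the same query, assuming a kubeconfig at the default path with the desired context (here, multinode-484895) already selected; it is illustrative only and not part of the test suite.

package main

import (
	"context"
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// Load the kubeconfig's current context (kubectl above selects it via --context instead).
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Same filter as the post-mortem step: every pod whose phase is not Running, in all namespaces.
	pods, err := clientset.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{
		FieldSelector: "status.phase!=Running",
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s is %s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}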

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (143.32s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-linux-amd64 -p multinode-484895 stop
E0103 19:35:48.654111   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/functional-166268/client.crt: no such file or directory
E0103 19:35:55.308189   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/addons-848866/client.crt: no such file or directory
multinode_test.go:342: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-484895 stop: exit status 82 (2m1.299208475s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-484895"  ...
	* Stopping node "multinode-484895"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:344: node stop returned an error. args "out/minikube-linux-amd64 -p multinode-484895 stop": exit status 82
multinode_test.go:348: (dbg) Run:  out/minikube-linux-amd64 -p multinode-484895 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-484895 status: exit status 3 (18.82776681s)

                                                
                                                
-- stdout --
	multinode-484895
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	multinode-484895-m02
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0103 19:38:07.046846   35798 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.191:22: connect: no route to host
	E0103 19:38:07.046879   35798 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.191:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:351: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-484895 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-484895 -n multinode-484895
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p multinode-484895 -n multinode-484895: exit status 3 (3.19597353s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0103 19:38:10.406835   35891 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.191:22: connect: no route to host
	E0103 19:38:10.406857   35891 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.191:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "multinode-484895" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiNode/serial/StopMultiNode (143.32s)
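Editor's note: the stop failure above exits with status 82, which minikube pairs with the GUEST_STOP_TIMEOUT reason printed to stderr. The following Go sketch (not the actual multinode_test.go helper; the binary path and profile name are copied from the output above) runs the same command and surfaces that exit code.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Invoke the same binary and profile as the failing step above.
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-484895", "stop")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// In the run above this prints 82, matching the GUEST_STOP_TIMEOUT failure.
		fmt.Printf("minikube stop exited with status %d\n", exitErr.ExitCode())
	}
}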

                                                
                                    
x
+
TestPreload (277.95s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-902716 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-902716 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m14.515484054s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-902716 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-902716 image pull gcr.io/k8s-minikube/busybox: (2.614564189s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-902716
E0103 19:48:51.704413   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/functional-166268/client.crt: no such file or directory
E0103 19:49:07.103689   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/ingress-addon-legacy-736101/client.crt: no such file or directory
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-902716: exit status 82 (2m1.293215109s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-902716"  ...
	* Stopping node "test-preload-902716"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-902716 failed: exit status 82
panic.go:523: *** TestPreload FAILED at 2024-01-03 19:50:46.156656216 +0000 UTC m=+3211.629233215
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-902716 -n test-preload-902716
E0103 19:50:48.653611   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/functional-166268/client.crt: no such file or directory
E0103 19:50:55.307683   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/addons-848866/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-902716 -n test-preload-902716: exit status 3 (18.622773075s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0103 19:51:04.774935   38871 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.112:22: connect: no route to host
	E0103 19:51:04.774969   38871 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.112:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-902716" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-902716" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-902716
--- FAIL: TestPreload (277.95s)
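Editor's note: the TestPreload flow above is: start the profile without a preload, pull gcr.io/k8s-minikube/busybox, then stop the VM (the step that timed out in this run). Below is a hedged sketch of the pull step followed by an in-cluster image listing that could be used to confirm the image survives a later restart; the listing step is an assumption for illustration, not the verification preload_test.go actually performs.

package main

import (
	"fmt"
	"os/exec"
)

// run invokes the minikube binary used in this report with the given args
// and returns its combined output.
func run(args ...string) (string, error) {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	return string(out), err
}

func main() {
	profile := "test-preload-902716" // profile name taken from the failure above

	// Pull the test image into the cluster, mirroring the preload_test.go step.
	if out, err := run("-p", profile, "image", "pull", "gcr.io/k8s-minikube/busybox"); err != nil {
		fmt.Printf("image pull failed: %v\n%s", err, out)
		return
	}

	// List images inside the cluster; after a restart of the profile, busybox
	// should still appear here if the cached images were retained.
	out, err := run("-p", profile, "image", "ls")
	if err != nil {
		fmt.Printf("image ls failed: %v\n%s", err, out)
		return
	}
	fmt.Print(out)
}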

                                                
                                    
x
+
TestRunningBinaryUpgrade (164.43s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.6.2.3137869070.exe start -p running-upgrade-886842 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.6.2.3137869070.exe start -p running-upgrade-886842 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m26.173703594s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-886842 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:143: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p running-upgrade-886842 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90 (14.817733938s)

                                                
                                                
-- stdout --
	* [running-upgrade-886842] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17885
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17885-9609/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17885-9609/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the kvm2 driver based on existing profile
	* Starting control plane node running-upgrade-886842 in cluster running-upgrade-886842
	* Updating the running kvm2 "running-upgrade-886842" VM ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0103 19:56:40.837616   45175 out.go:296] Setting OutFile to fd 1 ...
	I0103 19:56:40.837760   45175 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 19:56:40.837773   45175 out.go:309] Setting ErrFile to fd 2...
	I0103 19:56:40.837780   45175 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 19:56:40.837991   45175 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17885-9609/.minikube/bin
	I0103 19:56:40.838514   45175 out.go:303] Setting JSON to false
	I0103 19:56:40.839467   45175 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5948,"bootTime":1704305853,"procs":234,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0103 19:56:40.839533   45175 start.go:138] virtualization: kvm guest
	I0103 19:56:40.841629   45175 out.go:177] * [running-upgrade-886842] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0103 19:56:40.843449   45175 out.go:177]   - MINIKUBE_LOCATION=17885
	I0103 19:56:40.843527   45175 notify.go:220] Checking for updates...
	I0103 19:56:40.844646   45175 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0103 19:56:40.846367   45175 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17885-9609/kubeconfig
	I0103 19:56:40.847830   45175 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17885-9609/.minikube
	I0103 19:56:40.849160   45175 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0103 19:56:40.850500   45175 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0103 19:56:40.852361   45175 config.go:182] Loaded profile config "running-upgrade-886842": Driver=, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I0103 19:56:40.852391   45175 start_flags.go:694] config upgrade: Driver=kvm2
	I0103 19:56:40.852404   45175 start_flags.go:706] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c
	I0103 19:56:40.852513   45175 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/running-upgrade-886842/config.json ...
	I0103 19:56:40.853212   45175 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 19:56:40.853276   45175 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 19:56:40.867911   45175 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38961
	I0103 19:56:40.868311   45175 main.go:141] libmachine: () Calling .GetVersion
	I0103 19:56:40.868960   45175 main.go:141] libmachine: Using API Version  1
	I0103 19:56:40.868994   45175 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 19:56:40.869466   45175 main.go:141] libmachine: () Calling .GetMachineName
	I0103 19:56:40.869681   45175 main.go:141] libmachine: (running-upgrade-886842) Calling .DriverName
	I0103 19:56:40.871938   45175 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0103 19:56:40.873701   45175 driver.go:392] Setting default libvirt URI to qemu:///system
	I0103 19:56:40.874011   45175 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 19:56:40.874051   45175 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 19:56:40.889182   45175 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43533
	I0103 19:56:40.889580   45175 main.go:141] libmachine: () Calling .GetVersion
	I0103 19:56:40.890025   45175 main.go:141] libmachine: Using API Version  1
	I0103 19:56:40.890047   45175 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 19:56:40.890338   45175 main.go:141] libmachine: () Calling .GetMachineName
	I0103 19:56:40.890561   45175 main.go:141] libmachine: (running-upgrade-886842) Calling .DriverName
	I0103 19:56:40.929500   45175 out.go:177] * Using the kvm2 driver based on existing profile
	I0103 19:56:40.931142   45175 start.go:298] selected driver: kvm2
	I0103 19:56:40.931163   45175 start.go:902] validating driver "kvm2" against &{Name:running-upgrade-886842 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 Clust
erName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.67 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthS
ock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I0103 19:56:40.931266   45175 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0103 19:56:40.932072   45175 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 19:56:40.932150   45175 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17885-9609/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0103 19:56:40.951834   45175 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0103 19:56:40.952591   45175 cni.go:84] Creating CNI manager for ""
	I0103 19:56:40.952618   45175 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I0103 19:56:40.952631   45175 start_flags.go:323] config:
	{Name:running-upgrade-886842 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.67 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I0103 19:56:40.953209   45175 iso.go:125] acquiring lock: {Name:mk59d09085a9554144b68de9b7bfe0e0fce53cc5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 19:56:40.955855   45175 out.go:177] * Starting control plane node running-upgrade-886842 in cluster running-upgrade-886842
	I0103 19:56:40.957296   45175 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime crio
	W0103 19:56:41.350806   45175 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0103 19:56:41.350979   45175 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/running-upgrade-886842/config.json ...
	I0103 19:56:41.351113   45175 cache.go:107] acquiring lock: {Name:mk372d2259ddc4c784d2a14a7416ba9b749d6f9a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 19:56:41.351163   45175 cache.go:107] acquiring lock: {Name:mka00827c5b12b2cb7982a6962a00d5788af2b03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 19:56:41.351170   45175 cache.go:107] acquiring lock: {Name:mkadb8f143a7d487ec74c1161d64101af38d973e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 19:56:41.351224   45175 cache.go:115] /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0103 19:56:41.351238   45175 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 141.145µs
	I0103 19:56:41.351625   45175 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0103 19:56:41.351530   45175 start.go:365] acquiring machines lock for running-upgrade-886842: {Name:mk43df5d7e9fef8aa5f3e5c539ca15bff35ae8cf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0103 19:56:41.351743   45175 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.17.0
	I0103 19:56:41.351776   45175 cache.go:107] acquiring lock: {Name:mkd352e58ea2a8f1e36c9454bc8869766b95364a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 19:56:41.351941   45175 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.17.0
	I0103 19:56:41.351128   45175 cache.go:107] acquiring lock: {Name:mk1f16a06f8910e41cdd17b70f361dce514c5fd1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 19:56:41.352041   45175 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0103 19:56:41.352081   45175 cache.go:107] acquiring lock: {Name:mkb63c5d776ed15943c7e886132640431c979666 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 19:56:41.352301   45175 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.17.0
	I0103 19:56:41.352399   45175 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.17.0
	I0103 19:56:41.352294   45175 cache.go:107] acquiring lock: {Name:mkbcaae0f7a1a9b4f04dec54951ac3339c95f483 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 19:56:41.352579   45175 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0103 19:56:41.352667   45175 cache.go:107] acquiring lock: {Name:mk0101dd3a095bb948789a5f6d17fbc8e6b0c48f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 19:56:41.352762   45175 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.5
	I0103 19:56:41.353031   45175 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.17.0
	I0103 19:56:41.353268   45175 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.17.0
	I0103 19:56:41.353556   45175 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0103 19:56:41.353633   45175 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.17.0
	I0103 19:56:41.354496   45175 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.5: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.5
	I0103 19:56:41.354582   45175 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.17.0
	I0103 19:56:41.354609   45175 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0103 19:56:41.617865   45175 cache.go:162] opening:  /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0103 19:56:41.634713   45175 cache.go:162] opening:  /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0
	I0103 19:56:41.641429   45175 cache.go:162] opening:  /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I0103 19:56:41.646785   45175 cache.go:162] opening:  /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0
	I0103 19:56:41.655808   45175 cache.go:162] opening:  /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0
	I0103 19:56:41.662764   45175 cache.go:162] opening:  /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0
	I0103 19:56:41.681668   45175 cache.go:162] opening:  /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5
	I0103 19:56:41.791126   45175 cache.go:157] /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I0103 19:56:41.791150   45175 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 439.926559ms
	I0103 19:56:41.791161   45175 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I0103 19:56:42.284402   45175 cache.go:157] /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 exists
	I0103 19:56:42.284426   45175 cache.go:96] cache image "registry.k8s.io/coredns:1.6.5" -> "/home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5" took 931.764749ms
	I0103 19:56:42.284439   45175 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.5 -> /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 succeeded
	I0103 19:56:42.620724   45175 cache.go:157] /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 exists
	I0103 19:56:42.620753   45175 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.17.0" -> "/home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0" took 1.268718077s
	I0103 19:56:42.620769   45175 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.17.0 -> /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 succeeded
	I0103 19:56:42.841832   45175 cache.go:157] /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 exists
	I0103 19:56:42.841858   45175 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.17.0" -> "/home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0" took 1.490697131s
	I0103 19:56:42.841869   45175 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.17.0 -> /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 succeeded
	I0103 19:56:43.053674   45175 cache.go:157] /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 exists
	I0103 19:56:43.053698   45175 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.17.0" -> "/home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0" took 1.701950576s
	I0103 19:56:43.053712   45175 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.17.0 -> /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 succeeded
	I0103 19:56:43.513833   45175 cache.go:157] /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I0103 19:56:43.513858   45175 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 2.16269929s
	I0103 19:56:43.513870   45175 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I0103 19:56:43.579422   45175 cache.go:157] /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 exists
	I0103 19:56:43.579447   45175 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.17.0" -> "/home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0" took 2.228329292s
	I0103 19:56:43.579458   45175 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.17.0 -> /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 succeeded
	I0103 19:56:43.579479   45175 cache.go:87] Successfully saved all images to host disk.
	I0103 19:56:51.771207   45175 start.go:369] acquired machines lock for "running-upgrade-886842" in 10.419550638s
	I0103 19:56:51.771264   45175 start.go:96] Skipping create...Using existing machine configuration
	I0103 19:56:51.771272   45175 fix.go:54] fixHost starting: minikube
	I0103 19:56:51.771691   45175 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 19:56:51.771730   45175 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 19:56:51.788380   45175 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43199
	I0103 19:56:51.788848   45175 main.go:141] libmachine: () Calling .GetVersion
	I0103 19:56:51.789356   45175 main.go:141] libmachine: Using API Version  1
	I0103 19:56:51.789379   45175 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 19:56:51.789740   45175 main.go:141] libmachine: () Calling .GetMachineName
	I0103 19:56:51.789906   45175 main.go:141] libmachine: (running-upgrade-886842) Calling .DriverName
	I0103 19:56:51.790068   45175 main.go:141] libmachine: (running-upgrade-886842) Calling .GetState
	I0103 19:56:51.791947   45175 fix.go:102] recreateIfNeeded on running-upgrade-886842: state=Running err=<nil>
	W0103 19:56:51.791976   45175 fix.go:128] unexpected machine state, will restart: <nil>
	I0103 19:56:51.793682   45175 out.go:177] * Updating the running kvm2 "running-upgrade-886842" VM ...
	I0103 19:56:51.795202   45175 machine.go:88] provisioning docker machine ...
	I0103 19:56:51.795234   45175 main.go:141] libmachine: (running-upgrade-886842) Calling .DriverName
	I0103 19:56:51.795434   45175 main.go:141] libmachine: (running-upgrade-886842) Calling .GetMachineName
	I0103 19:56:51.795906   45175 buildroot.go:166] provisioning hostname "running-upgrade-886842"
	I0103 19:56:51.795940   45175 main.go:141] libmachine: (running-upgrade-886842) Calling .GetMachineName
	I0103 19:56:51.796086   45175 main.go:141] libmachine: (running-upgrade-886842) Calling .GetSSHHostname
	I0103 19:56:51.798406   45175 main.go:141] libmachine: (running-upgrade-886842) DBG | domain running-upgrade-886842 has defined MAC address 52:54:00:1e:02:7d in network minikube-net
	I0103 19:56:51.798845   45175 main.go:141] libmachine: (running-upgrade-886842) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:02:7d", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2024-01-03 20:54:54 +0000 UTC Type:0 Mac:52:54:00:1e:02:7d Iaid: IPaddr:192.168.50.67 Prefix:24 Hostname:running-upgrade-886842 Clientid:01:52:54:00:1e:02:7d}
	I0103 19:56:51.798876   45175 main.go:141] libmachine: (running-upgrade-886842) DBG | domain running-upgrade-886842 has defined IP address 192.168.50.67 and MAC address 52:54:00:1e:02:7d in network minikube-net
	I0103 19:56:51.798997   45175 main.go:141] libmachine: (running-upgrade-886842) Calling .GetSSHPort
	I0103 19:56:51.799189   45175 main.go:141] libmachine: (running-upgrade-886842) Calling .GetSSHKeyPath
	I0103 19:56:51.799330   45175 main.go:141] libmachine: (running-upgrade-886842) Calling .GetSSHKeyPath
	I0103 19:56:51.799434   45175 main.go:141] libmachine: (running-upgrade-886842) Calling .GetSSHUsername
	I0103 19:56:51.799614   45175 main.go:141] libmachine: Using SSH client type: native
	I0103 19:56:51.799949   45175 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.50.67 22 <nil> <nil>}
	I0103 19:56:51.799965   45175 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-886842 && echo "running-upgrade-886842" | sudo tee /etc/hostname
	I0103 19:56:51.934069   45175 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-886842
	
	I0103 19:56:51.934109   45175 main.go:141] libmachine: (running-upgrade-886842) Calling .GetSSHHostname
	I0103 19:56:51.936847   45175 main.go:141] libmachine: (running-upgrade-886842) DBG | domain running-upgrade-886842 has defined MAC address 52:54:00:1e:02:7d in network minikube-net
	I0103 19:56:51.937218   45175 main.go:141] libmachine: (running-upgrade-886842) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:02:7d", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2024-01-03 20:54:54 +0000 UTC Type:0 Mac:52:54:00:1e:02:7d Iaid: IPaddr:192.168.50.67 Prefix:24 Hostname:running-upgrade-886842 Clientid:01:52:54:00:1e:02:7d}
	I0103 19:56:51.937254   45175 main.go:141] libmachine: (running-upgrade-886842) DBG | domain running-upgrade-886842 has defined IP address 192.168.50.67 and MAC address 52:54:00:1e:02:7d in network minikube-net
	I0103 19:56:51.937384   45175 main.go:141] libmachine: (running-upgrade-886842) Calling .GetSSHPort
	I0103 19:56:51.937562   45175 main.go:141] libmachine: (running-upgrade-886842) Calling .GetSSHKeyPath
	I0103 19:56:51.937743   45175 main.go:141] libmachine: (running-upgrade-886842) Calling .GetSSHKeyPath
	I0103 19:56:51.937908   45175 main.go:141] libmachine: (running-upgrade-886842) Calling .GetSSHUsername
	I0103 19:56:51.938072   45175 main.go:141] libmachine: Using SSH client type: native
	I0103 19:56:51.938556   45175 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.50.67 22 <nil> <nil>}
	I0103 19:56:51.938588   45175 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-886842' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-886842/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-886842' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0103 19:56:52.050493   45175 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0103 19:56:52.050539   45175 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17885-9609/.minikube CaCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17885-9609/.minikube}
	I0103 19:56:52.050575   45175 buildroot.go:174] setting up certificates
	I0103 19:56:52.050591   45175 provision.go:83] configureAuth start
	I0103 19:56:52.050609   45175 main.go:141] libmachine: (running-upgrade-886842) Calling .GetMachineName
	I0103 19:56:52.050886   45175 main.go:141] libmachine: (running-upgrade-886842) Calling .GetIP
	I0103 19:56:52.053503   45175 main.go:141] libmachine: (running-upgrade-886842) DBG | domain running-upgrade-886842 has defined MAC address 52:54:00:1e:02:7d in network minikube-net
	I0103 19:56:52.053858   45175 main.go:141] libmachine: (running-upgrade-886842) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:02:7d", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2024-01-03 20:54:54 +0000 UTC Type:0 Mac:52:54:00:1e:02:7d Iaid: IPaddr:192.168.50.67 Prefix:24 Hostname:running-upgrade-886842 Clientid:01:52:54:00:1e:02:7d}
	I0103 19:56:52.053886   45175 main.go:141] libmachine: (running-upgrade-886842) DBG | domain running-upgrade-886842 has defined IP address 192.168.50.67 and MAC address 52:54:00:1e:02:7d in network minikube-net
	I0103 19:56:52.054013   45175 main.go:141] libmachine: (running-upgrade-886842) Calling .GetSSHHostname
	I0103 19:56:52.055947   45175 main.go:141] libmachine: (running-upgrade-886842) DBG | domain running-upgrade-886842 has defined MAC address 52:54:00:1e:02:7d in network minikube-net
	I0103 19:56:52.056233   45175 main.go:141] libmachine: (running-upgrade-886842) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:02:7d", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2024-01-03 20:54:54 +0000 UTC Type:0 Mac:52:54:00:1e:02:7d Iaid: IPaddr:192.168.50.67 Prefix:24 Hostname:running-upgrade-886842 Clientid:01:52:54:00:1e:02:7d}
	I0103 19:56:52.056255   45175 main.go:141] libmachine: (running-upgrade-886842) DBG | domain running-upgrade-886842 has defined IP address 192.168.50.67 and MAC address 52:54:00:1e:02:7d in network minikube-net
	I0103 19:56:52.056383   45175 provision.go:138] copyHostCerts
	I0103 19:56:52.056451   45175 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem, removing ...
	I0103 19:56:52.056460   45175 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem
	I0103 19:56:52.056510   45175 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem (1078 bytes)
	I0103 19:56:52.056630   45175 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem, removing ...
	I0103 19:56:52.056648   45175 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem
	I0103 19:56:52.056669   45175 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem (1123 bytes)
	I0103 19:56:52.056749   45175 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem, removing ...
	I0103 19:56:52.056757   45175 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem
	I0103 19:56:52.056775   45175 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem (1679 bytes)
	I0103 19:56:52.056817   45175 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-886842 san=[192.168.50.67 192.168.50.67 localhost 127.0.0.1 minikube running-upgrade-886842]
	I0103 19:56:52.262286   45175 provision.go:172] copyRemoteCerts
	I0103 19:56:52.262345   45175 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0103 19:56:52.262372   45175 main.go:141] libmachine: (running-upgrade-886842) Calling .GetSSHHostname
	I0103 19:56:52.264866   45175 main.go:141] libmachine: (running-upgrade-886842) DBG | domain running-upgrade-886842 has defined MAC address 52:54:00:1e:02:7d in network minikube-net
	I0103 19:56:52.265247   45175 main.go:141] libmachine: (running-upgrade-886842) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:02:7d", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2024-01-03 20:54:54 +0000 UTC Type:0 Mac:52:54:00:1e:02:7d Iaid: IPaddr:192.168.50.67 Prefix:24 Hostname:running-upgrade-886842 Clientid:01:52:54:00:1e:02:7d}
	I0103 19:56:52.265278   45175 main.go:141] libmachine: (running-upgrade-886842) DBG | domain running-upgrade-886842 has defined IP address 192.168.50.67 and MAC address 52:54:00:1e:02:7d in network minikube-net
	I0103 19:56:52.265456   45175 main.go:141] libmachine: (running-upgrade-886842) Calling .GetSSHPort
	I0103 19:56:52.265654   45175 main.go:141] libmachine: (running-upgrade-886842) Calling .GetSSHKeyPath
	I0103 19:56:52.265805   45175 main.go:141] libmachine: (running-upgrade-886842) Calling .GetSSHUsername
	I0103 19:56:52.265985   45175 sshutil.go:53] new ssh client: &{IP:192.168.50.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/running-upgrade-886842/id_rsa Username:docker}
	I0103 19:56:52.355067   45175 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0103 19:56:52.368720   45175 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0103 19:56:52.384959   45175 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0103 19:56:52.418843   45175 provision.go:86] duration metric: configureAuth took 368.236696ms
	I0103 19:56:52.418870   45175 buildroot.go:189] setting minikube options for container-runtime
	I0103 19:56:52.419103   45175 config.go:182] Loaded profile config "running-upgrade-886842": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I0103 19:56:52.419236   45175 main.go:141] libmachine: (running-upgrade-886842) Calling .GetSSHHostname
	I0103 19:56:52.422404   45175 main.go:141] libmachine: (running-upgrade-886842) DBG | domain running-upgrade-886842 has defined MAC address 52:54:00:1e:02:7d in network minikube-net
	I0103 19:56:52.422906   45175 main.go:141] libmachine: (running-upgrade-886842) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:02:7d", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2024-01-03 20:54:54 +0000 UTC Type:0 Mac:52:54:00:1e:02:7d Iaid: IPaddr:192.168.50.67 Prefix:24 Hostname:running-upgrade-886842 Clientid:01:52:54:00:1e:02:7d}
	I0103 19:56:52.422979   45175 main.go:141] libmachine: (running-upgrade-886842) DBG | domain running-upgrade-886842 has defined IP address 192.168.50.67 and MAC address 52:54:00:1e:02:7d in network minikube-net
	I0103 19:56:52.423080   45175 main.go:141] libmachine: (running-upgrade-886842) Calling .GetSSHPort
	I0103 19:56:52.423307   45175 main.go:141] libmachine: (running-upgrade-886842) Calling .GetSSHKeyPath
	I0103 19:56:52.423503   45175 main.go:141] libmachine: (running-upgrade-886842) Calling .GetSSHKeyPath
	I0103 19:56:52.423650   45175 main.go:141] libmachine: (running-upgrade-886842) Calling .GetSSHUsername
	I0103 19:56:52.423830   45175 main.go:141] libmachine: Using SSH client type: native
	I0103 19:56:52.424281   45175 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.50.67 22 <nil> <nil>}
	I0103 19:56:52.424305   45175 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0103 19:56:53.220964   45175 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0103 19:56:53.220992   45175 machine.go:91] provisioned docker machine in 1.425774407s
	I0103 19:56:53.221005   45175 start.go:300] post-start starting for "running-upgrade-886842" (driver="kvm2")
	I0103 19:56:53.221017   45175 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0103 19:56:53.221048   45175 main.go:141] libmachine: (running-upgrade-886842) Calling .DriverName
	I0103 19:56:53.221349   45175 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0103 19:56:53.221379   45175 main.go:141] libmachine: (running-upgrade-886842) Calling .GetSSHHostname
	I0103 19:56:53.224650   45175 main.go:141] libmachine: (running-upgrade-886842) DBG | domain running-upgrade-886842 has defined MAC address 52:54:00:1e:02:7d in network minikube-net
	I0103 19:56:53.225000   45175 main.go:141] libmachine: (running-upgrade-886842) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:02:7d", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2024-01-03 20:54:54 +0000 UTC Type:0 Mac:52:54:00:1e:02:7d Iaid: IPaddr:192.168.50.67 Prefix:24 Hostname:running-upgrade-886842 Clientid:01:52:54:00:1e:02:7d}
	I0103 19:56:53.225032   45175 main.go:141] libmachine: (running-upgrade-886842) DBG | domain running-upgrade-886842 has defined IP address 192.168.50.67 and MAC address 52:54:00:1e:02:7d in network minikube-net
	I0103 19:56:53.225199   45175 main.go:141] libmachine: (running-upgrade-886842) Calling .GetSSHPort
	I0103 19:56:53.225411   45175 main.go:141] libmachine: (running-upgrade-886842) Calling .GetSSHKeyPath
	I0103 19:56:53.225597   45175 main.go:141] libmachine: (running-upgrade-886842) Calling .GetSSHUsername
	I0103 19:56:53.225751   45175 sshutil.go:53] new ssh client: &{IP:192.168.50.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/running-upgrade-886842/id_rsa Username:docker}
	I0103 19:56:53.309376   45175 ssh_runner.go:195] Run: cat /etc/os-release
	I0103 19:56:53.314652   45175 info.go:137] Remote host: Buildroot 2019.02.7
	I0103 19:56:53.314699   45175 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-9609/.minikube/addons for local assets ...
	I0103 19:56:53.314782   45175 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-9609/.minikube/files for local assets ...
	I0103 19:56:53.314883   45175 filesync.go:149] local asset: /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem -> 167952.pem in /etc/ssl/certs
	I0103 19:56:53.315009   45175 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0103 19:56:53.321641   45175 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem --> /etc/ssl/certs/167952.pem (1708 bytes)
	I0103 19:56:53.339612   45175 start.go:303] post-start completed in 118.592374ms
	I0103 19:56:53.339648   45175 fix.go:56] fixHost completed within 1.568375059s
	I0103 19:56:53.339678   45175 main.go:141] libmachine: (running-upgrade-886842) Calling .GetSSHHostname
	I0103 19:56:53.342439   45175 main.go:141] libmachine: (running-upgrade-886842) DBG | domain running-upgrade-886842 has defined MAC address 52:54:00:1e:02:7d in network minikube-net
	I0103 19:56:53.342968   45175 main.go:141] libmachine: (running-upgrade-886842) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:02:7d", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2024-01-03 20:54:54 +0000 UTC Type:0 Mac:52:54:00:1e:02:7d Iaid: IPaddr:192.168.50.67 Prefix:24 Hostname:running-upgrade-886842 Clientid:01:52:54:00:1e:02:7d}
	I0103 19:56:53.343007   45175 main.go:141] libmachine: (running-upgrade-886842) DBG | domain running-upgrade-886842 has defined IP address 192.168.50.67 and MAC address 52:54:00:1e:02:7d in network minikube-net
	I0103 19:56:53.343195   45175 main.go:141] libmachine: (running-upgrade-886842) Calling .GetSSHPort
	I0103 19:56:53.343377   45175 main.go:141] libmachine: (running-upgrade-886842) Calling .GetSSHKeyPath
	I0103 19:56:53.343567   45175 main.go:141] libmachine: (running-upgrade-886842) Calling .GetSSHKeyPath
	I0103 19:56:53.343794   45175 main.go:141] libmachine: (running-upgrade-886842) Calling .GetSSHUsername
	I0103 19:56:53.343974   45175 main.go:141] libmachine: Using SSH client type: native
	I0103 19:56:53.344403   45175 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.50.67 22 <nil> <nil>}
	I0103 19:56:53.344425   45175 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0103 19:56:53.463585   45175 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704311813.459905796
	
	I0103 19:56:53.463610   45175 fix.go:206] guest clock: 1704311813.459905796
	I0103 19:56:53.463619   45175 fix.go:219] Guest: 2024-01-03 19:56:53.459905796 +0000 UTC Remote: 2024-01-03 19:56:53.339659064 +0000 UTC m=+12.569254477 (delta=120.246732ms)
	I0103 19:56:53.463641   45175 fix.go:190] guest clock delta is within tolerance: 120.246732ms
	I0103 19:56:53.463647   45175 start.go:83] releasing machines lock for "running-upgrade-886842", held for 1.692405464s
	I0103 19:56:53.463688   45175 main.go:141] libmachine: (running-upgrade-886842) Calling .DriverName
	I0103 19:56:53.464001   45175 main.go:141] libmachine: (running-upgrade-886842) Calling .GetIP
	I0103 19:56:53.467235   45175 main.go:141] libmachine: (running-upgrade-886842) DBG | domain running-upgrade-886842 has defined MAC address 52:54:00:1e:02:7d in network minikube-net
	I0103 19:56:53.467691   45175 main.go:141] libmachine: (running-upgrade-886842) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:02:7d", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2024-01-03 20:54:54 +0000 UTC Type:0 Mac:52:54:00:1e:02:7d Iaid: IPaddr:192.168.50.67 Prefix:24 Hostname:running-upgrade-886842 Clientid:01:52:54:00:1e:02:7d}
	I0103 19:56:53.467719   45175 main.go:141] libmachine: (running-upgrade-886842) DBG | domain running-upgrade-886842 has defined IP address 192.168.50.67 and MAC address 52:54:00:1e:02:7d in network minikube-net
	I0103 19:56:53.467957   45175 main.go:141] libmachine: (running-upgrade-886842) Calling .DriverName
	I0103 19:56:53.468666   45175 main.go:141] libmachine: (running-upgrade-886842) Calling .DriverName
	I0103 19:56:53.468833   45175 main.go:141] libmachine: (running-upgrade-886842) Calling .DriverName
	I0103 19:56:53.468909   45175 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0103 19:56:53.468949   45175 main.go:141] libmachine: (running-upgrade-886842) Calling .GetSSHHostname
	I0103 19:56:53.469149   45175 ssh_runner.go:195] Run: cat /version.json
	I0103 19:56:53.469171   45175 main.go:141] libmachine: (running-upgrade-886842) Calling .GetSSHHostname
	I0103 19:56:53.472367   45175 main.go:141] libmachine: (running-upgrade-886842) DBG | domain running-upgrade-886842 has defined MAC address 52:54:00:1e:02:7d in network minikube-net
	I0103 19:56:53.472711   45175 main.go:141] libmachine: (running-upgrade-886842) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:02:7d", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2024-01-03 20:54:54 +0000 UTC Type:0 Mac:52:54:00:1e:02:7d Iaid: IPaddr:192.168.50.67 Prefix:24 Hostname:running-upgrade-886842 Clientid:01:52:54:00:1e:02:7d}
	I0103 19:56:53.472797   45175 main.go:141] libmachine: (running-upgrade-886842) DBG | domain running-upgrade-886842 has defined IP address 192.168.50.67 and MAC address 52:54:00:1e:02:7d in network minikube-net
	I0103 19:56:53.473175   45175 main.go:141] libmachine: (running-upgrade-886842) DBG | domain running-upgrade-886842 has defined MAC address 52:54:00:1e:02:7d in network minikube-net
	I0103 19:56:53.473175   45175 main.go:141] libmachine: (running-upgrade-886842) Calling .GetSSHPort
	I0103 19:56:53.473407   45175 main.go:141] libmachine: (running-upgrade-886842) Calling .GetSSHKeyPath
	I0103 19:56:53.473575   45175 main.go:141] libmachine: (running-upgrade-886842) Calling .GetSSHUsername
	I0103 19:56:53.473656   45175 main.go:141] libmachine: (running-upgrade-886842) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:02:7d", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2024-01-03 20:54:54 +0000 UTC Type:0 Mac:52:54:00:1e:02:7d Iaid: IPaddr:192.168.50.67 Prefix:24 Hostname:running-upgrade-886842 Clientid:01:52:54:00:1e:02:7d}
	I0103 19:56:53.473688   45175 main.go:141] libmachine: (running-upgrade-886842) DBG | domain running-upgrade-886842 has defined IP address 192.168.50.67 and MAC address 52:54:00:1e:02:7d in network minikube-net
	I0103 19:56:53.473732   45175 sshutil.go:53] new ssh client: &{IP:192.168.50.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/running-upgrade-886842/id_rsa Username:docker}
	I0103 19:56:53.473858   45175 main.go:141] libmachine: (running-upgrade-886842) Calling .GetSSHPort
	I0103 19:56:53.473994   45175 main.go:141] libmachine: (running-upgrade-886842) Calling .GetSSHKeyPath
	I0103 19:56:53.474122   45175 main.go:141] libmachine: (running-upgrade-886842) Calling .GetSSHUsername
	I0103 19:56:53.474243   45175 sshutil.go:53] new ssh client: &{IP:192.168.50.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/running-upgrade-886842/id_rsa Username:docker}
	W0103 19:56:53.560798   45175 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0103 19:56:53.560873   45175 ssh_runner.go:195] Run: systemctl --version
	I0103 19:56:53.599314   45175 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0103 19:56:53.767881   45175 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0103 19:56:53.774143   45175 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0103 19:56:53.774224   45175 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0103 19:56:53.780236   45175 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0103 19:56:53.780260   45175 start.go:475] detecting cgroup driver to use...
	I0103 19:56:53.780320   45175 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0103 19:56:53.792288   45175 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0103 19:56:53.805197   45175 docker.go:203] disabling cri-docker service (if available) ...
	I0103 19:56:53.805267   45175 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0103 19:56:53.816366   45175 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0103 19:56:53.828831   45175 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0103 19:56:53.841561   45175 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0103 19:56:53.841630   45175 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0103 19:56:54.020948   45175 docker.go:219] disabling docker service ...
	I0103 19:56:54.021025   45175 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0103 19:56:55.053833   45175 ssh_runner.go:235] Completed: sudo systemctl stop -f docker.socket: (1.03277945s)
	I0103 19:56:55.053909   45175 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0103 19:56:55.075275   45175 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0103 19:56:55.310556   45175 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0103 19:56:55.508537   45175 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0103 19:56:55.526539   45175 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0103 19:56:55.558851   45175 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0103 19:56:55.558922   45175 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 19:56:55.571929   45175 out.go:177] 
	W0103 19:56:55.573432   45175 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0103 19:56:55.573451   45175 out.go:239] * 
	W0103 19:56:55.574488   45175 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0103 19:56:55.576478   45175 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:145: upgrade from v1.6.2 to HEAD failed: out/minikube-linux-amd64 start -p running-upgrade-886842 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2024-01-03 19:56:55.597930018 +0000 UTC m=+3581.070506997
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-886842 -n running-upgrade-886842
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-886842 -n running-upgrade-886842: exit status 4 (299.379606ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0103 19:56:55.855693   45428 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-886842" does not appear in /home/jenkins/minikube-integration/17885-9609/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-886842" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-886842" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-886842
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-886842: (1.462223138s)
--- FAIL: TestRunningBinaryUpgrade (164.43s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (299.78s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.6.2.1290875162.exe start -p stopped-upgrade-857735 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.6.2.1290875162.exe start -p stopped-upgrade-857735 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m12.03782966s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.6.2.1290875162.exe -p stopped-upgrade-857735 stop
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.6.2.1290875162.exe -p stopped-upgrade-857735 stop: (1m35.327949136s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-857735 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0103 20:00:48.654119   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/functional-166268/client.crt: no such file or directory
E0103 20:00:55.308019   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/addons-848866/client.crt: no such file or directory
version_upgrade_test.go:211: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p stopped-upgrade-857735 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90 (1m12.409187281s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-857735] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17885
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17885-9609/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17885-9609/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the kvm2 driver based on existing profile
	* Starting control plane node stopped-upgrade-857735 in cluster stopped-upgrade-857735
	* Restarting existing kvm2 VM for "stopped-upgrade-857735" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0103 20:00:46.483623   51092 out.go:296] Setting OutFile to fd 1 ...
	I0103 20:00:46.483794   51092 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 20:00:46.483803   51092 out.go:309] Setting ErrFile to fd 2...
	I0103 20:00:46.483810   51092 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 20:00:46.484179   51092 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17885-9609/.minikube/bin
	I0103 20:00:46.484871   51092 out.go:303] Setting JSON to false
	I0103 20:00:46.486005   51092 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6194,"bootTime":1704305853,"procs":249,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0103 20:00:46.486082   51092 start.go:138] virtualization: kvm guest
	I0103 20:00:46.490598   51092 out.go:177] * [stopped-upgrade-857735] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0103 20:00:46.492054   51092 out.go:177]   - MINIKUBE_LOCATION=17885
	I0103 20:00:46.492621   51092 notify.go:220] Checking for updates...
	I0103 20:00:46.493591   51092 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0103 20:00:46.495736   51092 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17885-9609/kubeconfig
	I0103 20:00:46.497325   51092 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17885-9609/.minikube
	I0103 20:00:46.502765   51092 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0103 20:00:46.504869   51092 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0103 20:00:46.507411   51092 config.go:182] Loaded profile config "stopped-upgrade-857735": Driver=, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I0103 20:00:46.507442   51092 start_flags.go:694] config upgrade: Driver=kvm2
	I0103 20:00:46.507456   51092 start_flags.go:706] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c
	I0103 20:00:46.507572   51092 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/stopped-upgrade-857735/config.json ...
	I0103 20:00:46.508401   51092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:00:46.508469   51092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:00:46.527657   51092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40681
	I0103 20:00:46.528103   51092 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:00:46.528753   51092 main.go:141] libmachine: Using API Version  1
	I0103 20:00:46.528779   51092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:00:46.529200   51092 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:00:46.529398   51092 main.go:141] libmachine: (stopped-upgrade-857735) Calling .DriverName
	I0103 20:00:46.531735   51092 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0103 20:00:46.533232   51092 driver.go:392] Setting default libvirt URI to qemu:///system
	I0103 20:00:46.533550   51092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:00:46.533591   51092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:00:46.548161   51092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40643
	I0103 20:00:46.548736   51092 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:00:46.549316   51092 main.go:141] libmachine: Using API Version  1
	I0103 20:00:46.549342   51092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:00:46.549886   51092 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:00:46.550110   51092 main.go:141] libmachine: (stopped-upgrade-857735) Calling .DriverName
	I0103 20:00:46.590002   51092 out.go:177] * Using the kvm2 driver based on existing profile
	I0103 20:00:46.591819   51092 start.go:298] selected driver: kvm2
	I0103 20:00:46.591838   51092 start.go:902] validating driver "kvm2" against &{Name:stopped-upgrade-857735 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.43 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I0103 20:00:46.591963   51092 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0103 20:00:46.593085   51092 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:00:46.593195   51092 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17885-9609/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0103 20:00:46.609041   51092 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0103 20:00:46.609442   51092 cni.go:84] Creating CNI manager for ""
	I0103 20:00:46.609464   51092 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I0103 20:00:46.609476   51092 start_flags.go:323] config:
	{Name:stopped-upgrade-857735 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.43 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I0103 20:00:46.609689   51092 iso.go:125] acquiring lock: {Name:mk59d09085a9554144b68de9b7bfe0e0fce53cc5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:00:46.611927   51092 out.go:177] * Starting control plane node stopped-upgrade-857735 in cluster stopped-upgrade-857735
	I0103 20:00:46.613671   51092 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime crio
	W0103 20:00:47.006573   51092 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0103 20:00:47.006759   51092 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/stopped-upgrade-857735/config.json ...
	I0103 20:00:47.006940   51092 cache.go:107] acquiring lock: {Name:mk372d2259ddc4c784d2a14a7416ba9b749d6f9a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:00:47.007007   51092 cache.go:107] acquiring lock: {Name:mka00827c5b12b2cb7982a6962a00d5788af2b03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:00:47.007056   51092 cache.go:115] /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0103 20:00:47.007068   51092 start.go:365] acquiring machines lock for stopped-upgrade-857735: {Name:mk43df5d7e9fef8aa5f3e5c539ca15bff35ae8cf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0103 20:00:47.007081   51092 cache.go:115] /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 exists
	I0103 20:00:47.007068   51092 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 142.055µs
	I0103 20:00:47.007092   51092 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0103 20:00:47.007092   51092 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.17.0" -> "/home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0" took 97.323µs
	I0103 20:00:47.007104   51092 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.17.0 -> /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 succeeded
	I0103 20:00:47.007108   51092 cache.go:107] acquiring lock: {Name:mkbcaae0f7a1a9b4f04dec54951ac3339c95f483 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:00:47.007119   51092 cache.go:107] acquiring lock: {Name:mkd352e58ea2a8f1e36c9454bc8869766b95364a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:00:47.006961   51092 cache.go:107] acquiring lock: {Name:mk1f16a06f8910e41cdd17b70f361dce514c5fd1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:00:47.007152   51092 cache.go:115] /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I0103 20:00:47.007161   51092 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 55.371µs
	I0103 20:00:47.007166   51092 cache.go:115] /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 exists
	I0103 20:00:47.007171   51092 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I0103 20:00:47.007175   51092 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.17.0" -> "/home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0" took 224.686µs
	I0103 20:00:47.007184   51092 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.17.0 -> /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 succeeded
	I0103 20:00:47.007164   51092 cache.go:107] acquiring lock: {Name:mk0101dd3a095bb948789a5f6d17fbc8e6b0c48f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:00:47.007190   51092 cache.go:115] /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 exists
	I0103 20:00:47.007205   51092 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.17.0" -> "/home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0" took 86.054µs
	I0103 20:00:47.007204   51092 cache.go:107] acquiring lock: {Name:mkadb8f143a7d487ec74c1161d64101af38d973e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:00:47.007223   51092 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.17.0 -> /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 succeeded
	I0103 20:00:47.007238   51092 cache.go:115] /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 exists
	I0103 20:00:47.007211   51092 cache.go:107] acquiring lock: {Name:mkb63c5d776ed15943c7e886132640431c979666 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:00:47.007247   51092 cache.go:96] cache image "registry.k8s.io/coredns:1.6.5" -> "/home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5" took 91.504µs
	I0103 20:00:47.007263   51092 cache.go:115] /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I0103 20:00:47.007270   51092 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.5 -> /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 succeeded
	I0103 20:00:47.007272   51092 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 71.685µs
	I0103 20:00:47.007292   51092 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I0103 20:00:47.007339   51092 cache.go:115] /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 exists
	I0103 20:00:47.007351   51092 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.17.0" -> "/home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0" took 178.861µs
	I0103 20:00:47.007359   51092 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.17.0 -> /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 succeeded
	I0103 20:00:47.007394   51092 cache.go:87] Successfully saved all images to host disk.
	I0103 20:01:11.179340   51092 start.go:369] acquired machines lock for "stopped-upgrade-857735" in 24.17224778s
	I0103 20:01:11.179395   51092 start.go:96] Skipping create...Using existing machine configuration
	I0103 20:01:11.179407   51092 fix.go:54] fixHost starting: minikube
	I0103 20:01:11.180300   51092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:01:11.180347   51092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:01:11.199801   51092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44015
	I0103 20:01:11.200277   51092 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:01:11.200776   51092 main.go:141] libmachine: Using API Version  1
	I0103 20:01:11.200800   51092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:01:11.201190   51092 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:01:11.201406   51092 main.go:141] libmachine: (stopped-upgrade-857735) Calling .DriverName
	I0103 20:01:11.201569   51092 main.go:141] libmachine: (stopped-upgrade-857735) Calling .GetState
	I0103 20:01:11.203332   51092 fix.go:102] recreateIfNeeded on stopped-upgrade-857735: state=Stopped err=<nil>
	I0103 20:01:11.203370   51092 main.go:141] libmachine: (stopped-upgrade-857735) Calling .DriverName
	W0103 20:01:11.203551   51092 fix.go:128] unexpected machine state, will restart: <nil>
	I0103 20:01:11.206021   51092 out.go:177] * Restarting existing kvm2 VM for "stopped-upgrade-857735" ...
	I0103 20:01:11.207589   51092 main.go:141] libmachine: (stopped-upgrade-857735) Calling .Start
	I0103 20:01:11.207812   51092 main.go:141] libmachine: (stopped-upgrade-857735) Ensuring networks are active...
	I0103 20:01:11.208692   51092 main.go:141] libmachine: (stopped-upgrade-857735) Ensuring network default is active
	I0103 20:01:11.209094   51092 main.go:141] libmachine: (stopped-upgrade-857735) Ensuring network minikube-net is active
	I0103 20:01:11.209529   51092 main.go:141] libmachine: (stopped-upgrade-857735) Getting domain xml...
	I0103 20:01:11.210372   51092 main.go:141] libmachine: (stopped-upgrade-857735) Creating domain...
	I0103 20:01:12.844993   51092 main.go:141] libmachine: (stopped-upgrade-857735) Waiting to get IP...
	I0103 20:01:12.846050   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | domain stopped-upgrade-857735 has defined MAC address 52:54:00:77:db:83 in network minikube-net
	I0103 20:01:12.846662   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | unable to find current IP address of domain stopped-upgrade-857735 in network minikube-net
	I0103 20:01:12.846741   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | I0103 20:01:12.846641   51277 retry.go:31] will retry after 257.242544ms: waiting for machine to come up
	I0103 20:01:13.105236   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | domain stopped-upgrade-857735 has defined MAC address 52:54:00:77:db:83 in network minikube-net
	I0103 20:01:13.105852   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | unable to find current IP address of domain stopped-upgrade-857735 in network minikube-net
	I0103 20:01:13.105887   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | I0103 20:01:13.105791   51277 retry.go:31] will retry after 380.518306ms: waiting for machine to come up
	I0103 20:01:13.487920   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | domain stopped-upgrade-857735 has defined MAC address 52:54:00:77:db:83 in network minikube-net
	I0103 20:01:13.488609   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | unable to find current IP address of domain stopped-upgrade-857735 in network minikube-net
	I0103 20:01:13.488635   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | I0103 20:01:13.488502   51277 retry.go:31] will retry after 438.300072ms: waiting for machine to come up
	I0103 20:01:13.928121   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | domain stopped-upgrade-857735 has defined MAC address 52:54:00:77:db:83 in network minikube-net
	I0103 20:01:13.928821   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | unable to find current IP address of domain stopped-upgrade-857735 in network minikube-net
	I0103 20:01:13.928845   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | I0103 20:01:13.928726   51277 retry.go:31] will retry after 527.632601ms: waiting for machine to come up
	I0103 20:01:14.458590   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | domain stopped-upgrade-857735 has defined MAC address 52:54:00:77:db:83 in network minikube-net
	I0103 20:01:14.459235   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | unable to find current IP address of domain stopped-upgrade-857735 in network minikube-net
	I0103 20:01:14.459269   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | I0103 20:01:14.459154   51277 retry.go:31] will retry after 758.825624ms: waiting for machine to come up
	I0103 20:01:15.219184   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | domain stopped-upgrade-857735 has defined MAC address 52:54:00:77:db:83 in network minikube-net
	I0103 20:01:15.219845   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | unable to find current IP address of domain stopped-upgrade-857735 in network minikube-net
	I0103 20:01:15.219872   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | I0103 20:01:15.219755   51277 retry.go:31] will retry after 696.128896ms: waiting for machine to come up
	I0103 20:01:15.917276   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | domain stopped-upgrade-857735 has defined MAC address 52:54:00:77:db:83 in network minikube-net
	I0103 20:01:15.917765   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | unable to find current IP address of domain stopped-upgrade-857735 in network minikube-net
	I0103 20:01:15.917792   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | I0103 20:01:15.917743   51277 retry.go:31] will retry after 869.030972ms: waiting for machine to come up
	I0103 20:01:16.788876   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | domain stopped-upgrade-857735 has defined MAC address 52:54:00:77:db:83 in network minikube-net
	I0103 20:01:16.789469   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | unable to find current IP address of domain stopped-upgrade-857735 in network minikube-net
	I0103 20:01:16.789493   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | I0103 20:01:16.789432   51277 retry.go:31] will retry after 1.47507127s: waiting for machine to come up
	I0103 20:01:18.265783   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | domain stopped-upgrade-857735 has defined MAC address 52:54:00:77:db:83 in network minikube-net
	I0103 20:01:18.266316   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | unable to find current IP address of domain stopped-upgrade-857735 in network minikube-net
	I0103 20:01:18.266347   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | I0103 20:01:18.266207   51277 retry.go:31] will retry after 1.618503987s: waiting for machine to come up
	I0103 20:01:19.886677   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | domain stopped-upgrade-857735 has defined MAC address 52:54:00:77:db:83 in network minikube-net
	I0103 20:01:19.887291   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | unable to find current IP address of domain stopped-upgrade-857735 in network minikube-net
	I0103 20:01:19.887317   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | I0103 20:01:19.887242   51277 retry.go:31] will retry after 1.639581274s: waiting for machine to come up
	I0103 20:01:21.528251   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | domain stopped-upgrade-857735 has defined MAC address 52:54:00:77:db:83 in network minikube-net
	I0103 20:01:21.528765   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | unable to find current IP address of domain stopped-upgrade-857735 in network minikube-net
	I0103 20:01:21.528800   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | I0103 20:01:21.528707   51277 retry.go:31] will retry after 1.91051871s: waiting for machine to come up
	I0103 20:01:23.441169   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | domain stopped-upgrade-857735 has defined MAC address 52:54:00:77:db:83 in network minikube-net
	I0103 20:01:23.441754   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | unable to find current IP address of domain stopped-upgrade-857735 in network minikube-net
	I0103 20:01:23.441783   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | I0103 20:01:23.441703   51277 retry.go:31] will retry after 2.379659418s: waiting for machine to come up
	I0103 20:01:25.823829   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | domain stopped-upgrade-857735 has defined MAC address 52:54:00:77:db:83 in network minikube-net
	I0103 20:01:25.824239   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | unable to find current IP address of domain stopped-upgrade-857735 in network minikube-net
	I0103 20:01:25.824267   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | I0103 20:01:25.824211   51277 retry.go:31] will retry after 3.030527847s: waiting for machine to come up
	I0103 20:01:28.856097   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | domain stopped-upgrade-857735 has defined MAC address 52:54:00:77:db:83 in network minikube-net
	I0103 20:01:28.856668   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | unable to find current IP address of domain stopped-upgrade-857735 in network minikube-net
	I0103 20:01:28.856696   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | I0103 20:01:28.856627   51277 retry.go:31] will retry after 5.265530034s: waiting for machine to come up
	I0103 20:01:34.124099   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | domain stopped-upgrade-857735 has defined MAC address 52:54:00:77:db:83 in network minikube-net
	I0103 20:01:34.124631   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | unable to find current IP address of domain stopped-upgrade-857735 in network minikube-net
	I0103 20:01:34.124684   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | I0103 20:01:34.124559   51277 retry.go:31] will retry after 5.929806431s: waiting for machine to come up
	I0103 20:01:40.056482   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | domain stopped-upgrade-857735 has defined MAC address 52:54:00:77:db:83 in network minikube-net
	I0103 20:01:40.057039   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | unable to find current IP address of domain stopped-upgrade-857735 in network minikube-net
	I0103 20:01:40.057071   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | I0103 20:01:40.056980   51277 retry.go:31] will retry after 5.963805451s: waiting for machine to come up
	I0103 20:01:46.024007   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | domain stopped-upgrade-857735 has defined MAC address 52:54:00:77:db:83 in network minikube-net
	I0103 20:01:46.024679   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | unable to find current IP address of domain stopped-upgrade-857735 in network minikube-net
	I0103 20:01:46.024714   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | I0103 20:01:46.024639   51277 retry.go:31] will retry after 10.029702003s: waiting for machine to come up
	I0103 20:01:56.055845   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | domain stopped-upgrade-857735 has defined MAC address 52:54:00:77:db:83 in network minikube-net
	I0103 20:01:56.056421   51092 main.go:141] libmachine: (stopped-upgrade-857735) Found IP for machine: 192.168.50.43
	I0103 20:01:56.056450   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | domain stopped-upgrade-857735 has current primary IP address 192.168.50.43 and MAC address 52:54:00:77:db:83 in network minikube-net
	I0103 20:01:56.056461   51092 main.go:141] libmachine: (stopped-upgrade-857735) Reserving static IP address...
	I0103 20:01:56.056875   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | found host DHCP lease matching {name: "stopped-upgrade-857735", mac: "52:54:00:77:db:83", ip: "192.168.50.43"} in network minikube-net: {Iface:virbr4 ExpiryTime:2024-01-03 21:01:41 +0000 UTC Type:0 Mac:52:54:00:77:db:83 Iaid: IPaddr:192.168.50.43 Prefix:24 Hostname:stopped-upgrade-857735 Clientid:01:52:54:00:77:db:83}
	I0103 20:01:56.056914   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | skip adding static IP to network minikube-net - found existing host DHCP lease matching {name: "stopped-upgrade-857735", mac: "52:54:00:77:db:83", ip: "192.168.50.43"}
	I0103 20:01:56.056933   51092 main.go:141] libmachine: (stopped-upgrade-857735) Reserved static IP address: 192.168.50.43
	I0103 20:01:56.056952   51092 main.go:141] libmachine: (stopped-upgrade-857735) Waiting for SSH to be available...
	I0103 20:01:56.056965   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | Getting to WaitForSSH function...
	I0103 20:01:56.059813   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | domain stopped-upgrade-857735 has defined MAC address 52:54:00:77:db:83 in network minikube-net
	I0103 20:01:56.060178   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:db:83", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2024-01-03 21:01:41 +0000 UTC Type:0 Mac:52:54:00:77:db:83 Iaid: IPaddr:192.168.50.43 Prefix:24 Hostname:stopped-upgrade-857735 Clientid:01:52:54:00:77:db:83}
	I0103 20:01:56.060216   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | domain stopped-upgrade-857735 has defined IP address 192.168.50.43 and MAC address 52:54:00:77:db:83 in network minikube-net
	I0103 20:01:56.060354   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | Using SSH client type: external
	I0103 20:01:56.060383   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | Using SSH private key: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/stopped-upgrade-857735/id_rsa (-rw-------)
	I0103 20:01:56.060414   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.43 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17885-9609/.minikube/machines/stopped-upgrade-857735/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0103 20:01:56.060430   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | About to run SSH command:
	I0103 20:01:56.060454   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | exit 0
	I0103 20:01:56.194130   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | SSH cmd err, output: <nil>: 
	I0103 20:01:56.194457   51092 main.go:141] libmachine: (stopped-upgrade-857735) Calling .GetConfigRaw
	I0103 20:01:56.195094   51092 main.go:141] libmachine: (stopped-upgrade-857735) Calling .GetIP
	I0103 20:01:56.197572   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | domain stopped-upgrade-857735 has defined MAC address 52:54:00:77:db:83 in network minikube-net
	I0103 20:01:56.197976   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:db:83", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2024-01-03 21:01:41 +0000 UTC Type:0 Mac:52:54:00:77:db:83 Iaid: IPaddr:192.168.50.43 Prefix:24 Hostname:stopped-upgrade-857735 Clientid:01:52:54:00:77:db:83}
	I0103 20:01:56.198011   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | domain stopped-upgrade-857735 has defined IP address 192.168.50.43 and MAC address 52:54:00:77:db:83 in network minikube-net
	I0103 20:01:56.198268   51092 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/stopped-upgrade-857735/config.json ...
	I0103 20:01:56.198454   51092 machine.go:88] provisioning docker machine ...
	I0103 20:01:56.198472   51092 main.go:141] libmachine: (stopped-upgrade-857735) Calling .DriverName
	I0103 20:01:56.198695   51092 main.go:141] libmachine: (stopped-upgrade-857735) Calling .GetMachineName
	I0103 20:01:56.198862   51092 buildroot.go:166] provisioning hostname "stopped-upgrade-857735"
	I0103 20:01:56.198880   51092 main.go:141] libmachine: (stopped-upgrade-857735) Calling .GetMachineName
	I0103 20:01:56.199065   51092 main.go:141] libmachine: (stopped-upgrade-857735) Calling .GetSSHHostname
	I0103 20:01:56.201314   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | domain stopped-upgrade-857735 has defined MAC address 52:54:00:77:db:83 in network minikube-net
	I0103 20:01:56.201698   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:db:83", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2024-01-03 21:01:41 +0000 UTC Type:0 Mac:52:54:00:77:db:83 Iaid: IPaddr:192.168.50.43 Prefix:24 Hostname:stopped-upgrade-857735 Clientid:01:52:54:00:77:db:83}
	I0103 20:01:56.201733   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | domain stopped-upgrade-857735 has defined IP address 192.168.50.43 and MAC address 52:54:00:77:db:83 in network minikube-net
	I0103 20:01:56.201884   51092 main.go:141] libmachine: (stopped-upgrade-857735) Calling .GetSSHPort
	I0103 20:01:56.202077   51092 main.go:141] libmachine: (stopped-upgrade-857735) Calling .GetSSHKeyPath
	I0103 20:01:56.202244   51092 main.go:141] libmachine: (stopped-upgrade-857735) Calling .GetSSHKeyPath
	I0103 20:01:56.202377   51092 main.go:141] libmachine: (stopped-upgrade-857735) Calling .GetSSHUsername
	I0103 20:01:56.202562   51092 main.go:141] libmachine: Using SSH client type: native
	I0103 20:01:56.203031   51092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.50.43 22 <nil> <nil>}
	I0103 20:01:56.203055   51092 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-857735 && echo "stopped-upgrade-857735" | sudo tee /etc/hostname
	I0103 20:01:56.326504   51092 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-857735
	
	I0103 20:01:56.326570   51092 main.go:141] libmachine: (stopped-upgrade-857735) Calling .GetSSHHostname
	I0103 20:01:56.329814   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | domain stopped-upgrade-857735 has defined MAC address 52:54:00:77:db:83 in network minikube-net
	I0103 20:01:56.330329   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:db:83", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2024-01-03 21:01:41 +0000 UTC Type:0 Mac:52:54:00:77:db:83 Iaid: IPaddr:192.168.50.43 Prefix:24 Hostname:stopped-upgrade-857735 Clientid:01:52:54:00:77:db:83}
	I0103 20:01:56.330359   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | domain stopped-upgrade-857735 has defined IP address 192.168.50.43 and MAC address 52:54:00:77:db:83 in network minikube-net
	I0103 20:01:56.330487   51092 main.go:141] libmachine: (stopped-upgrade-857735) Calling .GetSSHPort
	I0103 20:01:56.330683   51092 main.go:141] libmachine: (stopped-upgrade-857735) Calling .GetSSHKeyPath
	I0103 20:01:56.330868   51092 main.go:141] libmachine: (stopped-upgrade-857735) Calling .GetSSHKeyPath
	I0103 20:01:56.331021   51092 main.go:141] libmachine: (stopped-upgrade-857735) Calling .GetSSHUsername
	I0103 20:01:56.331215   51092 main.go:141] libmachine: Using SSH client type: native
	I0103 20:01:56.331522   51092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.50.43 22 <nil> <nil>}
	I0103 20:01:56.331540   51092 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-857735' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-857735/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-857735' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0103 20:01:56.452254   51092 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0103 20:01:56.452287   51092 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17885-9609/.minikube CaCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17885-9609/.minikube}
	I0103 20:01:56.452321   51092 buildroot.go:174] setting up certificates
	I0103 20:01:56.452336   51092 provision.go:83] configureAuth start
	I0103 20:01:56.452350   51092 main.go:141] libmachine: (stopped-upgrade-857735) Calling .GetMachineName
	I0103 20:01:56.452649   51092 main.go:141] libmachine: (stopped-upgrade-857735) Calling .GetIP
	I0103 20:01:56.455271   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | domain stopped-upgrade-857735 has defined MAC address 52:54:00:77:db:83 in network minikube-net
	I0103 20:01:56.455612   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:db:83", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2024-01-03 21:01:41 +0000 UTC Type:0 Mac:52:54:00:77:db:83 Iaid: IPaddr:192.168.50.43 Prefix:24 Hostname:stopped-upgrade-857735 Clientid:01:52:54:00:77:db:83}
	I0103 20:01:56.455640   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | domain stopped-upgrade-857735 has defined IP address 192.168.50.43 and MAC address 52:54:00:77:db:83 in network minikube-net
	I0103 20:01:56.455829   51092 main.go:141] libmachine: (stopped-upgrade-857735) Calling .GetSSHHostname
	I0103 20:01:56.458202   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | domain stopped-upgrade-857735 has defined MAC address 52:54:00:77:db:83 in network minikube-net
	I0103 20:01:56.458614   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:db:83", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2024-01-03 21:01:41 +0000 UTC Type:0 Mac:52:54:00:77:db:83 Iaid: IPaddr:192.168.50.43 Prefix:24 Hostname:stopped-upgrade-857735 Clientid:01:52:54:00:77:db:83}
	I0103 20:01:56.458646   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | domain stopped-upgrade-857735 has defined IP address 192.168.50.43 and MAC address 52:54:00:77:db:83 in network minikube-net
	I0103 20:01:56.458847   51092 provision.go:138] copyHostCerts
	I0103 20:01:56.458910   51092 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem, removing ...
	I0103 20:01:56.458929   51092 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem
	I0103 20:01:56.459011   51092 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem (1679 bytes)
	I0103 20:01:56.459127   51092 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem, removing ...
	I0103 20:01:56.459143   51092 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem
	I0103 20:01:56.459180   51092 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem (1078 bytes)
	I0103 20:01:56.459262   51092 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem, removing ...
	I0103 20:01:56.459272   51092 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem
	I0103 20:01:56.459302   51092 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem (1123 bytes)
	I0103 20:01:56.459390   51092 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-857735 san=[192.168.50.43 192.168.50.43 localhost 127.0.0.1 minikube stopped-upgrade-857735]
	I0103 20:01:56.711331   51092 provision.go:172] copyRemoteCerts
	I0103 20:01:56.711487   51092 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0103 20:01:56.711558   51092 main.go:141] libmachine: (stopped-upgrade-857735) Calling .GetSSHHostname
	I0103 20:01:56.715323   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | domain stopped-upgrade-857735 has defined MAC address 52:54:00:77:db:83 in network minikube-net
	I0103 20:01:56.715627   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:db:83", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2024-01-03 21:01:41 +0000 UTC Type:0 Mac:52:54:00:77:db:83 Iaid: IPaddr:192.168.50.43 Prefix:24 Hostname:stopped-upgrade-857735 Clientid:01:52:54:00:77:db:83}
	I0103 20:01:56.715652   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | domain stopped-upgrade-857735 has defined IP address 192.168.50.43 and MAC address 52:54:00:77:db:83 in network minikube-net
	I0103 20:01:56.715889   51092 main.go:141] libmachine: (stopped-upgrade-857735) Calling .GetSSHPort
	I0103 20:01:56.716114   51092 main.go:141] libmachine: (stopped-upgrade-857735) Calling .GetSSHKeyPath
	I0103 20:01:56.716317   51092 main.go:141] libmachine: (stopped-upgrade-857735) Calling .GetSSHUsername
	I0103 20:01:56.716493   51092 sshutil.go:53] new ssh client: &{IP:192.168.50.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/stopped-upgrade-857735/id_rsa Username:docker}
	I0103 20:01:56.805756   51092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0103 20:01:56.821251   51092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0103 20:01:56.836617   51092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0103 20:01:56.853842   51092 provision.go:86] duration metric: configureAuth took 401.490758ms
	I0103 20:01:56.853873   51092 buildroot.go:189] setting minikube options for container-runtime
	I0103 20:01:56.854060   51092 config.go:182] Loaded profile config "stopped-upgrade-857735": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I0103 20:01:56.854173   51092 main.go:141] libmachine: (stopped-upgrade-857735) Calling .GetSSHHostname
	I0103 20:01:56.857091   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | domain stopped-upgrade-857735 has defined MAC address 52:54:00:77:db:83 in network minikube-net
	I0103 20:01:56.857345   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:db:83", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2024-01-03 21:01:41 +0000 UTC Type:0 Mac:52:54:00:77:db:83 Iaid: IPaddr:192.168.50.43 Prefix:24 Hostname:stopped-upgrade-857735 Clientid:01:52:54:00:77:db:83}
	I0103 20:01:56.857371   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | domain stopped-upgrade-857735 has defined IP address 192.168.50.43 and MAC address 52:54:00:77:db:83 in network minikube-net
	I0103 20:01:56.857647   51092 main.go:141] libmachine: (stopped-upgrade-857735) Calling .GetSSHPort
	I0103 20:01:56.857857   51092 main.go:141] libmachine: (stopped-upgrade-857735) Calling .GetSSHKeyPath
	I0103 20:01:56.858012   51092 main.go:141] libmachine: (stopped-upgrade-857735) Calling .GetSSHKeyPath
	I0103 20:01:56.858120   51092 main.go:141] libmachine: (stopped-upgrade-857735) Calling .GetSSHUsername
	I0103 20:01:56.858256   51092 main.go:141] libmachine: Using SSH client type: native
	I0103 20:01:56.858776   51092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.50.43 22 <nil> <nil>}
	I0103 20:01:56.858807   51092 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0103 20:01:57.962603   51092 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0103 20:01:57.962628   51092 machine.go:91] provisioned docker machine in 1.764161187s
	I0103 20:01:57.962638   51092 start.go:300] post-start starting for "stopped-upgrade-857735" (driver="kvm2")
	I0103 20:01:57.962647   51092 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0103 20:01:57.962666   51092 main.go:141] libmachine: (stopped-upgrade-857735) Calling .DriverName
	I0103 20:01:57.963043   51092 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0103 20:01:57.963089   51092 main.go:141] libmachine: (stopped-upgrade-857735) Calling .GetSSHHostname
	I0103 20:01:57.965431   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | domain stopped-upgrade-857735 has defined MAC address 52:54:00:77:db:83 in network minikube-net
	I0103 20:01:57.965792   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:db:83", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2024-01-03 21:01:41 +0000 UTC Type:0 Mac:52:54:00:77:db:83 Iaid: IPaddr:192.168.50.43 Prefix:24 Hostname:stopped-upgrade-857735 Clientid:01:52:54:00:77:db:83}
	I0103 20:01:57.965823   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | domain stopped-upgrade-857735 has defined IP address 192.168.50.43 and MAC address 52:54:00:77:db:83 in network minikube-net
	I0103 20:01:57.965965   51092 main.go:141] libmachine: (stopped-upgrade-857735) Calling .GetSSHPort
	I0103 20:01:57.966159   51092 main.go:141] libmachine: (stopped-upgrade-857735) Calling .GetSSHKeyPath
	I0103 20:01:57.966323   51092 main.go:141] libmachine: (stopped-upgrade-857735) Calling .GetSSHUsername
	I0103 20:01:57.966494   51092 sshutil.go:53] new ssh client: &{IP:192.168.50.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/stopped-upgrade-857735/id_rsa Username:docker}
	I0103 20:01:58.054730   51092 ssh_runner.go:195] Run: cat /etc/os-release
	I0103 20:01:58.059551   51092 info.go:137] Remote host: Buildroot 2019.02.7
	I0103 20:01:58.059579   51092 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-9609/.minikube/addons for local assets ...
	I0103 20:01:58.059658   51092 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-9609/.minikube/files for local assets ...
	I0103 20:01:58.059787   51092 filesync.go:149] local asset: /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem -> 167952.pem in /etc/ssl/certs
	I0103 20:01:58.059903   51092 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0103 20:01:58.065961   51092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem --> /etc/ssl/certs/167952.pem (1708 bytes)
	I0103 20:01:58.081290   51092 start.go:303] post-start completed in 118.640727ms
	I0103 20:01:58.081311   51092 fix.go:56] fixHost completed within 46.901904896s
	I0103 20:01:58.081335   51092 main.go:141] libmachine: (stopped-upgrade-857735) Calling .GetSSHHostname
	I0103 20:01:58.084383   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | domain stopped-upgrade-857735 has defined MAC address 52:54:00:77:db:83 in network minikube-net
	I0103 20:01:58.084892   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:db:83", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2024-01-03 21:01:41 +0000 UTC Type:0 Mac:52:54:00:77:db:83 Iaid: IPaddr:192.168.50.43 Prefix:24 Hostname:stopped-upgrade-857735 Clientid:01:52:54:00:77:db:83}
	I0103 20:01:58.084923   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | domain stopped-upgrade-857735 has defined IP address 192.168.50.43 and MAC address 52:54:00:77:db:83 in network minikube-net
	I0103 20:01:58.085107   51092 main.go:141] libmachine: (stopped-upgrade-857735) Calling .GetSSHPort
	I0103 20:01:58.085306   51092 main.go:141] libmachine: (stopped-upgrade-857735) Calling .GetSSHKeyPath
	I0103 20:01:58.085446   51092 main.go:141] libmachine: (stopped-upgrade-857735) Calling .GetSSHKeyPath
	I0103 20:01:58.085606   51092 main.go:141] libmachine: (stopped-upgrade-857735) Calling .GetSSHUsername
	I0103 20:01:58.085802   51092 main.go:141] libmachine: Using SSH client type: native
	I0103 20:01:58.086174   51092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.50.43 22 <nil> <nil>}
	I0103 20:01:58.086196   51092 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0103 20:01:58.207145   51092 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704312118.166519736
	
	I0103 20:01:58.207167   51092 fix.go:206] guest clock: 1704312118.166519736
	I0103 20:01:58.207174   51092 fix.go:219] Guest: 2024-01-03 20:01:58.166519736 +0000 UTC Remote: 2024-01-03 20:01:58.08131502 +0000 UTC m=+71.673847423 (delta=85.204716ms)
	I0103 20:01:58.207212   51092 fix.go:190] guest clock delta is within tolerance: 85.204716ms
	I0103 20:01:58.207222   51092 start.go:83] releasing machines lock for "stopped-upgrade-857735", held for 47.027851331s
	I0103 20:01:58.207250   51092 main.go:141] libmachine: (stopped-upgrade-857735) Calling .DriverName
	I0103 20:01:58.207538   51092 main.go:141] libmachine: (stopped-upgrade-857735) Calling .GetIP
	I0103 20:01:58.210450   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | domain stopped-upgrade-857735 has defined MAC address 52:54:00:77:db:83 in network minikube-net
	I0103 20:01:58.210869   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:db:83", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2024-01-03 21:01:41 +0000 UTC Type:0 Mac:52:54:00:77:db:83 Iaid: IPaddr:192.168.50.43 Prefix:24 Hostname:stopped-upgrade-857735 Clientid:01:52:54:00:77:db:83}
	I0103 20:01:58.210894   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | domain stopped-upgrade-857735 has defined IP address 192.168.50.43 and MAC address 52:54:00:77:db:83 in network minikube-net
	I0103 20:01:58.211064   51092 main.go:141] libmachine: (stopped-upgrade-857735) Calling .DriverName
	I0103 20:01:58.211576   51092 main.go:141] libmachine: (stopped-upgrade-857735) Calling .DriverName
	I0103 20:01:58.211781   51092 main.go:141] libmachine: (stopped-upgrade-857735) Calling .DriverName
	I0103 20:01:58.211861   51092 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0103 20:01:58.211895   51092 main.go:141] libmachine: (stopped-upgrade-857735) Calling .GetSSHHostname
	I0103 20:01:58.212006   51092 ssh_runner.go:195] Run: cat /version.json
	I0103 20:01:58.212031   51092 main.go:141] libmachine: (stopped-upgrade-857735) Calling .GetSSHHostname
	I0103 20:01:58.214965   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | domain stopped-upgrade-857735 has defined MAC address 52:54:00:77:db:83 in network minikube-net
	I0103 20:01:58.215296   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:db:83", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2024-01-03 21:01:41 +0000 UTC Type:0 Mac:52:54:00:77:db:83 Iaid: IPaddr:192.168.50.43 Prefix:24 Hostname:stopped-upgrade-857735 Clientid:01:52:54:00:77:db:83}
	I0103 20:01:58.215325   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | domain stopped-upgrade-857735 has defined IP address 192.168.50.43 and MAC address 52:54:00:77:db:83 in network minikube-net
	I0103 20:01:58.215345   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | domain stopped-upgrade-857735 has defined MAC address 52:54:00:77:db:83 in network minikube-net
	I0103 20:01:58.215491   51092 main.go:141] libmachine: (stopped-upgrade-857735) Calling .GetSSHPort
	I0103 20:01:58.215683   51092 main.go:141] libmachine: (stopped-upgrade-857735) Calling .GetSSHKeyPath
	I0103 20:01:58.215798   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:db:83", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2024-01-03 21:01:41 +0000 UTC Type:0 Mac:52:54:00:77:db:83 Iaid: IPaddr:192.168.50.43 Prefix:24 Hostname:stopped-upgrade-857735 Clientid:01:52:54:00:77:db:83}
	I0103 20:01:58.215828   51092 main.go:141] libmachine: (stopped-upgrade-857735) DBG | domain stopped-upgrade-857735 has defined IP address 192.168.50.43 and MAC address 52:54:00:77:db:83 in network minikube-net
	I0103 20:01:58.215879   51092 main.go:141] libmachine: (stopped-upgrade-857735) Calling .GetSSHUsername
	I0103 20:01:58.215992   51092 main.go:141] libmachine: (stopped-upgrade-857735) Calling .GetSSHPort
	I0103 20:01:58.216048   51092 sshutil.go:53] new ssh client: &{IP:192.168.50.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/stopped-upgrade-857735/id_rsa Username:docker}
	I0103 20:01:58.216166   51092 main.go:141] libmachine: (stopped-upgrade-857735) Calling .GetSSHKeyPath
	I0103 20:01:58.216289   51092 main.go:141] libmachine: (stopped-upgrade-857735) Calling .GetSSHUsername
	I0103 20:01:58.216457   51092 sshutil.go:53] new ssh client: &{IP:192.168.50.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/stopped-upgrade-857735/id_rsa Username:docker}
	W0103 20:01:58.329149   51092 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0103 20:01:58.329221   51092 ssh_runner.go:195] Run: systemctl --version
	I0103 20:01:58.334695   51092 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0103 20:01:58.409216   51092 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0103 20:01:58.415393   51092 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0103 20:01:58.415457   51092 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0103 20:01:58.420513   51092 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0103 20:01:58.420534   51092 start.go:475] detecting cgroup driver to use...
	I0103 20:01:58.420596   51092 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0103 20:01:58.431035   51092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0103 20:01:58.442176   51092 docker.go:203] disabling cri-docker service (if available) ...
	I0103 20:01:58.442240   51092 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0103 20:01:58.450845   51092 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0103 20:01:58.459083   51092 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0103 20:01:58.467453   51092 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0103 20:01:58.467555   51092 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0103 20:01:58.564102   51092 docker.go:219] disabling docker service ...
	I0103 20:01:58.564182   51092 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0103 20:01:58.576308   51092 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0103 20:01:58.585576   51092 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0103 20:01:58.667518   51092 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0103 20:01:58.773080   51092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0103 20:01:58.782088   51092 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0103 20:01:58.794357   51092 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0103 20:01:58.794426   51092 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:01:58.804012   51092 out.go:177] 
	W0103 20:01:58.805384   51092 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0103 20:01:58.805400   51092 out.go:239] * 
	W0103 20:01:58.806211   51092 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0103 20:01:58.807273   51092 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:213: upgrade from v1.6.2 to HEAD failed: out/minikube-linux-amd64 start -p stopped-upgrade-857735 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (299.78s)
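Note on the failure above: the stderr shows the proximate cause. The restored v1.6.2 guest reports "Buildroot 2019.02.7" and has no /etc/crio/crio.conf.d/02-crio.conf, so the pause_image sed rewrite exits 1 and start aborts with RUNTIME_ENABLE (exit status 90). The sketch below is purely illustrative and is not minikube's actual code; the helper name and the fallback to /etc/crio/crio.conf are assumptions. It only shows how the same rewrite could tolerate older guests that lack the drop-in file:

	// pauseimage_sketch.go - hypothetical, for illustration only.
	package main

	import "fmt"

	// pauseImageCmd builds a shell command that rewrites pause_image in the
	// first CRI-O config file that exists, instead of assuming the drop-in
	// /etc/crio/crio.conf.d/02-crio.conf is always present (it is not in the
	// Buildroot 2019.02.7 guest restored by this upgrade test).
	func pauseImageCmd(pauseImage string) string {
		return fmt.Sprintf(`sh -c 'for f in /etc/crio/crio.conf.d/02-crio.conf /etc/crio/crio.conf; do `+
			`if [ -f "$f" ]; then sudo sed -i "s|^.*pause_image = .*$|pause_image = \"%s\"|" "$f"; exit $?; fi; `+
			`done; echo "no CRI-O config found" >&2; exit 1'`, pauseImage)
	}

	func main() {
		// Prints the command that would be handed to the SSH runner.
		fmt.Println(pauseImageCmd("registry.k8s.io/pause:3.1"))
	}

On a guest like the one in this log, the generated command would fall through to /etc/crio/crio.conf instead of failing on the missing drop-in path.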

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (99.99s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-705639 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0103 19:59:07.103015   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/ingress-addon-legacy-736101/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-705639 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m35.690025678s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-705639] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17885
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17885-9609/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17885-9609/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting control plane node pause-705639 in cluster pause-705639
	* Updating the running kvm2 "pause-705639" VM ...
	* Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-705639" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
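For reference, pause_test.go:100 fails here because the second-start stdout quoted above never contains the literal phrase it asserts on; the cluster was re-prepared ("Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...", "Configuring bridge CNI ...") rather than left untouched. A minimal sketch of that kind of substring assertion follows (illustrative only; the function name is an assumption, not the test's actual helper):

	// reconfig_check_sketch.go - hypothetical, for illustration only.
	package main

	import (
		"fmt"
		"strings"
	)

	// skippedReconfiguration reports whether a second `minikube start` left the
	// running cluster alone, using the phrase the test asserts on.
	func skippedReconfiguration(startOutput string) bool {
		return strings.Contains(startOutput, "The running cluster does not require reconfiguration")
	}

	func main() {
		stdout := "* Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...\n" +
			"* Configuring bridge CNI (Container Networking Interface) ...\n"
		fmt.Println(skippedReconfiguration(stdout)) // false, so the test fails
	}
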
** stderr ** 
	I0103 19:58:26.447585   46928 out.go:296] Setting OutFile to fd 1 ...
	I0103 19:58:26.447696   46928 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 19:58:26.447703   46928 out.go:309] Setting ErrFile to fd 2...
	I0103 19:58:26.447708   46928 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 19:58:26.447921   46928 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17885-9609/.minikube/bin
	I0103 19:58:26.448477   46928 out.go:303] Setting JSON to false
	I0103 19:58:26.449527   46928 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6054,"bootTime":1704305853,"procs":249,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0103 19:58:26.449598   46928 start.go:138] virtualization: kvm guest
	I0103 19:58:26.452036   46928 out.go:177] * [pause-705639] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0103 19:58:26.453921   46928 out.go:177]   - MINIKUBE_LOCATION=17885
	I0103 19:58:26.453979   46928 notify.go:220] Checking for updates...
	I0103 19:58:26.455542   46928 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0103 19:58:26.457109   46928 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17885-9609/kubeconfig
	I0103 19:58:26.458671   46928 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17885-9609/.minikube
	I0103 19:58:26.459994   46928 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0103 19:58:26.461353   46928 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0103 19:58:26.463221   46928 config.go:182] Loaded profile config "pause-705639": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 19:58:26.463888   46928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 19:58:26.463953   46928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 19:58:26.479087   46928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45173
	I0103 19:58:26.479566   46928 main.go:141] libmachine: () Calling .GetVersion
	I0103 19:58:26.480253   46928 main.go:141] libmachine: Using API Version  1
	I0103 19:58:26.480275   46928 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 19:58:26.480680   46928 main.go:141] libmachine: () Calling .GetMachineName
	I0103 19:58:26.480943   46928 main.go:141] libmachine: (pause-705639) Calling .DriverName
	I0103 19:58:26.481257   46928 driver.go:392] Setting default libvirt URI to qemu:///system
	I0103 19:58:26.481621   46928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 19:58:26.481687   46928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 19:58:26.497388   46928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33131
	I0103 19:58:26.497835   46928 main.go:141] libmachine: () Calling .GetVersion
	I0103 19:58:26.498451   46928 main.go:141] libmachine: Using API Version  1
	I0103 19:58:26.498470   46928 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 19:58:26.498894   46928 main.go:141] libmachine: () Calling .GetMachineName
	I0103 19:58:26.499132   46928 main.go:141] libmachine: (pause-705639) Calling .DriverName
	I0103 19:58:26.541477   46928 out.go:177] * Using the kvm2 driver based on existing profile
	I0103 19:58:26.542907   46928 start.go:298] selected driver: kvm2
	I0103 19:58:26.542929   46928 start.go:902] validating driver "kvm2" against &{Name:pause-705639 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.28.4 ClusterName:pause-705639 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.83.234 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-install
er:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 19:58:26.543138   46928 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0103 19:58:26.543658   46928 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 19:58:26.543752   46928 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17885-9609/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0103 19:58:26.559823   46928 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0103 19:58:26.560845   46928 cni.go:84] Creating CNI manager for ""
	I0103 19:58:26.560873   46928 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0103 19:58:26.560887   46928 start_flags.go:323] config:
	{Name:pause-705639 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:pause-705639 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.83.234 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 19:58:26.561138   46928 iso.go:125] acquiring lock: {Name:mk59d09085a9554144b68de9b7bfe0e0fce53cc5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 19:58:26.563033   46928 out.go:177] * Starting control plane node pause-705639 in cluster pause-705639
	I0103 19:58:26.564449   46928 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0103 19:58:26.564532   46928 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0103 19:58:26.564547   46928 cache.go:56] Caching tarball of preloaded images
	I0103 19:58:26.564627   46928 preload.go:174] Found /home/jenkins/minikube-integration/17885-9609/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0103 19:58:26.564642   46928 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0103 19:58:26.564829   46928 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/pause-705639/config.json ...
	I0103 19:58:26.565102   46928 start.go:365] acquiring machines lock for pause-705639: {Name:mk43df5d7e9fef8aa5f3e5c539ca15bff35ae8cf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0103 19:59:06.235735   46928 start.go:369] acquired machines lock for "pause-705639" in 39.670593937s
	I0103 19:59:06.235781   46928 start.go:96] Skipping create...Using existing machine configuration
	I0103 19:59:06.235792   46928 fix.go:54] fixHost starting: 
	I0103 19:59:06.236270   46928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 19:59:06.236323   46928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 19:59:06.256294   46928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43791
	I0103 19:59:06.256796   46928 main.go:141] libmachine: () Calling .GetVersion
	I0103 19:59:06.257305   46928 main.go:141] libmachine: Using API Version  1
	I0103 19:59:06.257337   46928 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 19:59:06.257681   46928 main.go:141] libmachine: () Calling .GetMachineName
	I0103 19:59:06.257873   46928 main.go:141] libmachine: (pause-705639) Calling .DriverName
	I0103 19:59:06.258039   46928 main.go:141] libmachine: (pause-705639) Calling .GetState
	I0103 19:59:06.259865   46928 fix.go:102] recreateIfNeeded on pause-705639: state=Running err=<nil>
	W0103 19:59:06.259889   46928 fix.go:128] unexpected machine state, will restart: <nil>
	I0103 19:59:06.261538   46928 out.go:177] * Updating the running kvm2 "pause-705639" VM ...
	I0103 19:59:06.262868   46928 machine.go:88] provisioning docker machine ...
	I0103 19:59:06.262895   46928 main.go:141] libmachine: (pause-705639) Calling .DriverName
	I0103 19:59:06.263096   46928 main.go:141] libmachine: (pause-705639) Calling .GetMachineName
	I0103 19:59:06.263268   46928 buildroot.go:166] provisioning hostname "pause-705639"
	I0103 19:59:06.263288   46928 main.go:141] libmachine: (pause-705639) Calling .GetMachineName
	I0103 19:59:06.263436   46928 main.go:141] libmachine: (pause-705639) Calling .GetSSHHostname
	I0103 19:59:06.266528   46928 main.go:141] libmachine: (pause-705639) DBG | domain pause-705639 has defined MAC address 52:54:00:27:6d:2c in network mk-pause-705639
	I0103 19:59:06.266939   46928 main.go:141] libmachine: (pause-705639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:6d:2c", ip: ""} in network mk-pause-705639: {Iface:virbr3 ExpiryTime:2024-01-03 20:57:33 +0000 UTC Type:0 Mac:52:54:00:27:6d:2c Iaid: IPaddr:192.168.83.234 Prefix:24 Hostname:pause-705639 Clientid:01:52:54:00:27:6d:2c}
	I0103 19:59:06.266974   46928 main.go:141] libmachine: (pause-705639) DBG | domain pause-705639 has defined IP address 192.168.83.234 and MAC address 52:54:00:27:6d:2c in network mk-pause-705639
	I0103 19:59:06.267136   46928 main.go:141] libmachine: (pause-705639) Calling .GetSSHPort
	I0103 19:59:06.267306   46928 main.go:141] libmachine: (pause-705639) Calling .GetSSHKeyPath
	I0103 19:59:06.267447   46928 main.go:141] libmachine: (pause-705639) Calling .GetSSHKeyPath
	I0103 19:59:06.267587   46928 main.go:141] libmachine: (pause-705639) Calling .GetSSHUsername
	I0103 19:59:06.267762   46928 main.go:141] libmachine: Using SSH client type: native
	I0103 19:59:06.268282   46928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.83.234 22 <nil> <nil>}
	I0103 19:59:06.268305   46928 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-705639 && echo "pause-705639" | sudo tee /etc/hostname
	I0103 19:59:06.428933   46928 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-705639
	
	I0103 19:59:06.428968   46928 main.go:141] libmachine: (pause-705639) Calling .GetSSHHostname
	I0103 19:59:06.432625   46928 main.go:141] libmachine: (pause-705639) DBG | domain pause-705639 has defined MAC address 52:54:00:27:6d:2c in network mk-pause-705639
	I0103 19:59:06.433143   46928 main.go:141] libmachine: (pause-705639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:6d:2c", ip: ""} in network mk-pause-705639: {Iface:virbr3 ExpiryTime:2024-01-03 20:57:33 +0000 UTC Type:0 Mac:52:54:00:27:6d:2c Iaid: IPaddr:192.168.83.234 Prefix:24 Hostname:pause-705639 Clientid:01:52:54:00:27:6d:2c}
	I0103 19:59:06.433192   46928 main.go:141] libmachine: (pause-705639) DBG | domain pause-705639 has defined IP address 192.168.83.234 and MAC address 52:54:00:27:6d:2c in network mk-pause-705639
	I0103 19:59:06.433359   46928 main.go:141] libmachine: (pause-705639) Calling .GetSSHPort
	I0103 19:59:06.433615   46928 main.go:141] libmachine: (pause-705639) Calling .GetSSHKeyPath
	I0103 19:59:06.433868   46928 main.go:141] libmachine: (pause-705639) Calling .GetSSHKeyPath
	I0103 19:59:06.434114   46928 main.go:141] libmachine: (pause-705639) Calling .GetSSHUsername
	I0103 19:59:06.434370   46928 main.go:141] libmachine: Using SSH client type: native
	I0103 19:59:06.434861   46928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.83.234 22 <nil> <nil>}
	I0103 19:59:06.434892   46928 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-705639' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-705639/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-705639' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0103 19:59:06.560296   46928 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0103 19:59:06.560319   46928 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17885-9609/.minikube CaCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17885-9609/.minikube}
	I0103 19:59:06.560335   46928 buildroot.go:174] setting up certificates
	I0103 19:59:06.560345   46928 provision.go:83] configureAuth start
	I0103 19:59:06.560358   46928 main.go:141] libmachine: (pause-705639) Calling .GetMachineName
	I0103 19:59:06.560620   46928 main.go:141] libmachine: (pause-705639) Calling .GetIP
	I0103 19:59:06.563611   46928 main.go:141] libmachine: (pause-705639) DBG | domain pause-705639 has defined MAC address 52:54:00:27:6d:2c in network mk-pause-705639
	I0103 19:59:06.564034   46928 main.go:141] libmachine: (pause-705639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:6d:2c", ip: ""} in network mk-pause-705639: {Iface:virbr3 ExpiryTime:2024-01-03 20:57:33 +0000 UTC Type:0 Mac:52:54:00:27:6d:2c Iaid: IPaddr:192.168.83.234 Prefix:24 Hostname:pause-705639 Clientid:01:52:54:00:27:6d:2c}
	I0103 19:59:06.564065   46928 main.go:141] libmachine: (pause-705639) DBG | domain pause-705639 has defined IP address 192.168.83.234 and MAC address 52:54:00:27:6d:2c in network mk-pause-705639
	I0103 19:59:06.564262   46928 main.go:141] libmachine: (pause-705639) Calling .GetSSHHostname
	I0103 19:59:06.566681   46928 main.go:141] libmachine: (pause-705639) DBG | domain pause-705639 has defined MAC address 52:54:00:27:6d:2c in network mk-pause-705639
	I0103 19:59:06.567085   46928 main.go:141] libmachine: (pause-705639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:6d:2c", ip: ""} in network mk-pause-705639: {Iface:virbr3 ExpiryTime:2024-01-03 20:57:33 +0000 UTC Type:0 Mac:52:54:00:27:6d:2c Iaid: IPaddr:192.168.83.234 Prefix:24 Hostname:pause-705639 Clientid:01:52:54:00:27:6d:2c}
	I0103 19:59:06.567109   46928 main.go:141] libmachine: (pause-705639) DBG | domain pause-705639 has defined IP address 192.168.83.234 and MAC address 52:54:00:27:6d:2c in network mk-pause-705639
	I0103 19:59:06.567279   46928 provision.go:138] copyHostCerts
	I0103 19:59:06.567366   46928 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem, removing ...
	I0103 19:59:06.567404   46928 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem
	I0103 19:59:06.567489   46928 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem (1078 bytes)
	I0103 19:59:06.567617   46928 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem, removing ...
	I0103 19:59:06.567629   46928 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem
	I0103 19:59:06.567678   46928 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem (1123 bytes)
	I0103 19:59:06.567766   46928 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem, removing ...
	I0103 19:59:06.567776   46928 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem
	I0103 19:59:06.567805   46928 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem (1679 bytes)
	I0103 19:59:06.567884   46928 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem org=jenkins.pause-705639 san=[192.168.83.234 192.168.83.234 localhost 127.0.0.1 minikube pause-705639]
	I0103 19:59:07.003191   46928 provision.go:172] copyRemoteCerts
	I0103 19:59:07.003268   46928 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0103 19:59:07.003308   46928 main.go:141] libmachine: (pause-705639) Calling .GetSSHHostname
	I0103 19:59:07.006603   46928 main.go:141] libmachine: (pause-705639) DBG | domain pause-705639 has defined MAC address 52:54:00:27:6d:2c in network mk-pause-705639
	I0103 19:59:07.006994   46928 main.go:141] libmachine: (pause-705639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:6d:2c", ip: ""} in network mk-pause-705639: {Iface:virbr3 ExpiryTime:2024-01-03 20:57:33 +0000 UTC Type:0 Mac:52:54:00:27:6d:2c Iaid: IPaddr:192.168.83.234 Prefix:24 Hostname:pause-705639 Clientid:01:52:54:00:27:6d:2c}
	I0103 19:59:07.007072   46928 main.go:141] libmachine: (pause-705639) DBG | domain pause-705639 has defined IP address 192.168.83.234 and MAC address 52:54:00:27:6d:2c in network mk-pause-705639
	I0103 19:59:07.007219   46928 main.go:141] libmachine: (pause-705639) Calling .GetSSHPort
	I0103 19:59:07.007452   46928 main.go:141] libmachine: (pause-705639) Calling .GetSSHKeyPath
	I0103 19:59:07.007678   46928 main.go:141] libmachine: (pause-705639) Calling .GetSSHUsername
	I0103 19:59:07.007814   46928 sshutil.go:53] new ssh client: &{IP:192.168.83.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/pause-705639/id_rsa Username:docker}
	I0103 19:59:07.096647   46928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0103 19:59:07.123938   46928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0103 19:59:07.154586   46928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0103 19:59:07.181063   46928 provision.go:86] duration metric: configureAuth took 620.70601ms
	I0103 19:59:07.181091   46928 buildroot.go:189] setting minikube options for container-runtime
	I0103 19:59:07.181350   46928 config.go:182] Loaded profile config "pause-705639": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 19:59:07.181427   46928 main.go:141] libmachine: (pause-705639) Calling .GetSSHHostname
	I0103 19:59:07.184139   46928 main.go:141] libmachine: (pause-705639) DBG | domain pause-705639 has defined MAC address 52:54:00:27:6d:2c in network mk-pause-705639
	I0103 19:59:07.184590   46928 main.go:141] libmachine: (pause-705639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:6d:2c", ip: ""} in network mk-pause-705639: {Iface:virbr3 ExpiryTime:2024-01-03 20:57:33 +0000 UTC Type:0 Mac:52:54:00:27:6d:2c Iaid: IPaddr:192.168.83.234 Prefix:24 Hostname:pause-705639 Clientid:01:52:54:00:27:6d:2c}
	I0103 19:59:07.184626   46928 main.go:141] libmachine: (pause-705639) DBG | domain pause-705639 has defined IP address 192.168.83.234 and MAC address 52:54:00:27:6d:2c in network mk-pause-705639
	I0103 19:59:07.184857   46928 main.go:141] libmachine: (pause-705639) Calling .GetSSHPort
	I0103 19:59:07.185098   46928 main.go:141] libmachine: (pause-705639) Calling .GetSSHKeyPath
	I0103 19:59:07.185340   46928 main.go:141] libmachine: (pause-705639) Calling .GetSSHKeyPath
	I0103 19:59:07.185521   46928 main.go:141] libmachine: (pause-705639) Calling .GetSSHUsername
	I0103 19:59:07.185703   46928 main.go:141] libmachine: Using SSH client type: native
	I0103 19:59:07.186152   46928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.83.234 22 <nil> <nil>}
	I0103 19:59:07.186182   46928 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0103 19:59:15.689483   46928 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0103 19:59:15.689508   46928 machine.go:91] provisioned docker machine in 9.426622268s
	I0103 19:59:15.689520   46928 start.go:300] post-start starting for "pause-705639" (driver="kvm2")
	I0103 19:59:15.689532   46928 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0103 19:59:15.689554   46928 main.go:141] libmachine: (pause-705639) Calling .DriverName
	I0103 19:59:15.689846   46928 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0103 19:59:15.689872   46928 main.go:141] libmachine: (pause-705639) Calling .GetSSHHostname
	I0103 19:59:15.692829   46928 main.go:141] libmachine: (pause-705639) DBG | domain pause-705639 has defined MAC address 52:54:00:27:6d:2c in network mk-pause-705639
	I0103 19:59:15.693227   46928 main.go:141] libmachine: (pause-705639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:6d:2c", ip: ""} in network mk-pause-705639: {Iface:virbr3 ExpiryTime:2024-01-03 20:57:33 +0000 UTC Type:0 Mac:52:54:00:27:6d:2c Iaid: IPaddr:192.168.83.234 Prefix:24 Hostname:pause-705639 Clientid:01:52:54:00:27:6d:2c}
	I0103 19:59:15.693262   46928 main.go:141] libmachine: (pause-705639) DBG | domain pause-705639 has defined IP address 192.168.83.234 and MAC address 52:54:00:27:6d:2c in network mk-pause-705639
	I0103 19:59:15.693430   46928 main.go:141] libmachine: (pause-705639) Calling .GetSSHPort
	I0103 19:59:15.693617   46928 main.go:141] libmachine: (pause-705639) Calling .GetSSHKeyPath
	I0103 19:59:15.693772   46928 main.go:141] libmachine: (pause-705639) Calling .GetSSHUsername
	I0103 19:59:15.693861   46928 sshutil.go:53] new ssh client: &{IP:192.168.83.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/pause-705639/id_rsa Username:docker}
	I0103 19:59:15.787517   46928 ssh_runner.go:195] Run: cat /etc/os-release
	I0103 19:59:15.792387   46928 info.go:137] Remote host: Buildroot 2021.02.12
	I0103 19:59:15.792414   46928 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-9609/.minikube/addons for local assets ...
	I0103 19:59:15.792480   46928 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-9609/.minikube/files for local assets ...
	I0103 19:59:15.792599   46928 filesync.go:149] local asset: /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem -> 167952.pem in /etc/ssl/certs
	I0103 19:59:15.792720   46928 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0103 19:59:15.803619   46928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem --> /etc/ssl/certs/167952.pem (1708 bytes)
	I0103 19:59:15.829566   46928 start.go:303] post-start completed in 140.030674ms
	I0103 19:59:15.829594   46928 fix.go:56] fixHost completed within 9.593803491s
	I0103 19:59:15.829618   46928 main.go:141] libmachine: (pause-705639) Calling .GetSSHHostname
	I0103 19:59:15.832951   46928 main.go:141] libmachine: (pause-705639) DBG | domain pause-705639 has defined MAC address 52:54:00:27:6d:2c in network mk-pause-705639
	I0103 19:59:15.833396   46928 main.go:141] libmachine: (pause-705639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:6d:2c", ip: ""} in network mk-pause-705639: {Iface:virbr3 ExpiryTime:2024-01-03 20:57:33 +0000 UTC Type:0 Mac:52:54:00:27:6d:2c Iaid: IPaddr:192.168.83.234 Prefix:24 Hostname:pause-705639 Clientid:01:52:54:00:27:6d:2c}
	I0103 19:59:15.833427   46928 main.go:141] libmachine: (pause-705639) DBG | domain pause-705639 has defined IP address 192.168.83.234 and MAC address 52:54:00:27:6d:2c in network mk-pause-705639
	I0103 19:59:15.833609   46928 main.go:141] libmachine: (pause-705639) Calling .GetSSHPort
	I0103 19:59:15.833809   46928 main.go:141] libmachine: (pause-705639) Calling .GetSSHKeyPath
	I0103 19:59:15.833946   46928 main.go:141] libmachine: (pause-705639) Calling .GetSSHKeyPath
	I0103 19:59:15.834113   46928 main.go:141] libmachine: (pause-705639) Calling .GetSSHUsername
	I0103 19:59:15.834290   46928 main.go:141] libmachine: Using SSH client type: native
	I0103 19:59:15.834774   46928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.83.234 22 <nil> <nil>}
	I0103 19:59:15.834790   46928 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0103 19:59:15.952414   46928 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704311955.949297063
	
	I0103 19:59:15.952435   46928 fix.go:206] guest clock: 1704311955.949297063
	I0103 19:59:15.952446   46928 fix.go:219] Guest: 2024-01-03 19:59:15.949297063 +0000 UTC Remote: 2024-01-03 19:59:15.829598467 +0000 UTC m=+49.453389746 (delta=119.698596ms)
	I0103 19:59:15.952469   46928 fix.go:190] guest clock delta is within tolerance: 119.698596ms
	I0103 19:59:15.952484   46928 start.go:83] releasing machines lock for "pause-705639", held for 9.716717269s
	I0103 19:59:15.952510   46928 main.go:141] libmachine: (pause-705639) Calling .DriverName
	I0103 19:59:15.952770   46928 main.go:141] libmachine: (pause-705639) Calling .GetIP
	I0103 19:59:15.956266   46928 main.go:141] libmachine: (pause-705639) DBG | domain pause-705639 has defined MAC address 52:54:00:27:6d:2c in network mk-pause-705639
	I0103 19:59:15.956667   46928 main.go:141] libmachine: (pause-705639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:6d:2c", ip: ""} in network mk-pause-705639: {Iface:virbr3 ExpiryTime:2024-01-03 20:57:33 +0000 UTC Type:0 Mac:52:54:00:27:6d:2c Iaid: IPaddr:192.168.83.234 Prefix:24 Hostname:pause-705639 Clientid:01:52:54:00:27:6d:2c}
	I0103 19:59:15.956693   46928 main.go:141] libmachine: (pause-705639) DBG | domain pause-705639 has defined IP address 192.168.83.234 and MAC address 52:54:00:27:6d:2c in network mk-pause-705639
	I0103 19:59:15.956860   46928 main.go:141] libmachine: (pause-705639) Calling .DriverName
	I0103 19:59:15.957419   46928 main.go:141] libmachine: (pause-705639) Calling .DriverName
	I0103 19:59:15.957654   46928 main.go:141] libmachine: (pause-705639) Calling .DriverName
	I0103 19:59:15.957881   46928 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0103 19:59:15.957927   46928 main.go:141] libmachine: (pause-705639) Calling .GetSSHHostname
	I0103 19:59:15.958036   46928 ssh_runner.go:195] Run: cat /version.json
	I0103 19:59:15.958082   46928 main.go:141] libmachine: (pause-705639) Calling .GetSSHHostname
	I0103 19:59:15.961498   46928 main.go:141] libmachine: (pause-705639) DBG | domain pause-705639 has defined MAC address 52:54:00:27:6d:2c in network mk-pause-705639
	I0103 19:59:15.962472   46928 main.go:141] libmachine: (pause-705639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:6d:2c", ip: ""} in network mk-pause-705639: {Iface:virbr3 ExpiryTime:2024-01-03 20:57:33 +0000 UTC Type:0 Mac:52:54:00:27:6d:2c Iaid: IPaddr:192.168.83.234 Prefix:24 Hostname:pause-705639 Clientid:01:52:54:00:27:6d:2c}
	I0103 19:59:15.962495   46928 main.go:141] libmachine: (pause-705639) DBG | domain pause-705639 has defined IP address 192.168.83.234 and MAC address 52:54:00:27:6d:2c in network mk-pause-705639
	I0103 19:59:15.962863   46928 main.go:141] libmachine: (pause-705639) DBG | domain pause-705639 has defined MAC address 52:54:00:27:6d:2c in network mk-pause-705639
	I0103 19:59:15.962886   46928 main.go:141] libmachine: (pause-705639) Calling .GetSSHPort
	I0103 19:59:15.963080   46928 main.go:141] libmachine: (pause-705639) Calling .GetSSHKeyPath
	I0103 19:59:15.963223   46928 main.go:141] libmachine: (pause-705639) Calling .GetSSHUsername
	I0103 19:59:15.963372   46928 sshutil.go:53] new ssh client: &{IP:192.168.83.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/pause-705639/id_rsa Username:docker}
	I0103 19:59:15.963940   46928 main.go:141] libmachine: (pause-705639) Calling .GetSSHPort
	I0103 19:59:15.963955   46928 main.go:141] libmachine: (pause-705639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:6d:2c", ip: ""} in network mk-pause-705639: {Iface:virbr3 ExpiryTime:2024-01-03 20:57:33 +0000 UTC Type:0 Mac:52:54:00:27:6d:2c Iaid: IPaddr:192.168.83.234 Prefix:24 Hostname:pause-705639 Clientid:01:52:54:00:27:6d:2c}
	I0103 19:59:15.964149   46928 main.go:141] libmachine: (pause-705639) DBG | domain pause-705639 has defined IP address 192.168.83.234 and MAC address 52:54:00:27:6d:2c in network mk-pause-705639
	I0103 19:59:15.964180   46928 main.go:141] libmachine: (pause-705639) Calling .GetSSHKeyPath
	I0103 19:59:15.964365   46928 main.go:141] libmachine: (pause-705639) Calling .GetSSHUsername
	I0103 19:59:15.964537   46928 sshutil.go:53] new ssh client: &{IP:192.168.83.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/pause-705639/id_rsa Username:docker}
	I0103 19:59:16.052076   46928 ssh_runner.go:195] Run: systemctl --version
	I0103 19:59:16.094800   46928 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0103 19:59:16.253785   46928 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0103 19:59:16.260009   46928 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0103 19:59:16.260140   46928 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0103 19:59:16.269328   46928 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0103 19:59:16.269358   46928 start.go:475] detecting cgroup driver to use...
	I0103 19:59:16.269438   46928 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0103 19:59:16.285168   46928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0103 19:59:16.298587   46928 docker.go:203] disabling cri-docker service (if available) ...
	I0103 19:59:16.298653   46928 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0103 19:59:16.314053   46928 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0103 19:59:16.332572   46928 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0103 19:59:16.510045   46928 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0103 19:59:17.057189   46928 docker.go:219] disabling docker service ...
	I0103 19:59:17.057322   46928 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0103 19:59:17.150653   46928 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0103 19:59:17.183510   46928 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0103 19:59:17.448971   46928 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0103 19:59:17.680045   46928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0103 19:59:17.717682   46928 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0103 19:59:17.786818   46928 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0103 19:59:17.786891   46928 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 19:59:17.832971   46928 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0103 19:59:17.833065   46928 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 19:59:17.879105   46928 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 19:59:17.904244   46928 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 19:59:17.924728   46928 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0103 19:59:17.947193   46928 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0103 19:59:17.966161   46928 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0103 19:59:17.982210   46928 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0103 19:59:18.230752   46928 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0103 19:59:19.454129   46928 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.223279522s)
	I0103 19:59:19.454164   46928 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0103 19:59:19.454238   46928 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0103 19:59:19.461320   46928 start.go:543] Will wait 60s for crictl version
	I0103 19:59:19.461402   46928 ssh_runner.go:195] Run: which crictl
	I0103 19:59:19.465681   46928 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0103 19:59:19.519956   46928 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0103 19:59:19.520096   46928 ssh_runner.go:195] Run: crio --version
	I0103 19:59:19.581686   46928 ssh_runner.go:195] Run: crio --version
	I0103 19:59:19.817698   46928 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0103 19:59:19.819288   46928 main.go:141] libmachine: (pause-705639) Calling .GetIP
	I0103 19:59:19.823017   46928 main.go:141] libmachine: (pause-705639) DBG | domain pause-705639 has defined MAC address 52:54:00:27:6d:2c in network mk-pause-705639
	I0103 19:59:19.823172   46928 main.go:141] libmachine: (pause-705639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:6d:2c", ip: ""} in network mk-pause-705639: {Iface:virbr3 ExpiryTime:2024-01-03 20:57:33 +0000 UTC Type:0 Mac:52:54:00:27:6d:2c Iaid: IPaddr:192.168.83.234 Prefix:24 Hostname:pause-705639 Clientid:01:52:54:00:27:6d:2c}
	I0103 19:59:19.823211   46928 main.go:141] libmachine: (pause-705639) DBG | domain pause-705639 has defined IP address 192.168.83.234 and MAC address 52:54:00:27:6d:2c in network mk-pause-705639
	I0103 19:59:19.823485   46928 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0103 19:59:19.844679   46928 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0103 19:59:19.844756   46928 ssh_runner.go:195] Run: sudo crictl images --output json
	I0103 19:59:20.023822   46928 crio.go:496] all images are preloaded for cri-o runtime.
	I0103 19:59:20.023853   46928 crio.go:415] Images already preloaded, skipping extraction
	I0103 19:59:20.023925   46928 ssh_runner.go:195] Run: sudo crictl images --output json
	I0103 19:59:20.107254   46928 crio.go:496] all images are preloaded for cri-o runtime.
	I0103 19:59:20.107335   46928 cache_images.go:84] Images are preloaded, skipping loading
	I0103 19:59:20.107422   46928 ssh_runner.go:195] Run: crio config
	I0103 19:59:20.217725   46928 cni.go:84] Creating CNI manager for ""
	I0103 19:59:20.217814   46928 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0103 19:59:20.217855   46928 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0103 19:59:20.217906   46928 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.234 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-705639 NodeName:pause-705639 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.234"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.234 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0103 19:59:20.218164   46928 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.234
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-705639"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.234
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.234"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0103 19:59:20.218347   46928 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=pause-705639 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.83.234
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:pause-705639 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0103 19:59:20.218466   46928 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0103 19:59:20.232183   46928 binaries.go:44] Found k8s binaries, skipping transfer
	I0103 19:59:20.232321   46928 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0103 19:59:20.244580   46928 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I0103 19:59:20.264556   46928 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0103 19:59:20.290255   46928 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I0103 19:59:20.323420   46928 ssh_runner.go:195] Run: grep 192.168.83.234	control-plane.minikube.internal$ /etc/hosts
	I0103 19:59:20.333829   46928 certs.go:56] Setting up /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/pause-705639 for IP: 192.168.83.234
	I0103 19:59:20.333873   46928 certs.go:190] acquiring lock for shared ca certs: {Name:mkcbd6a6a2f3ee7625ecf4a1f72bb7f9689bd33d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 19:59:20.334040   46928 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17885-9609/.minikube/ca.key
	I0103 19:59:20.334097   46928 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.key
	I0103 19:59:20.334185   46928 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/pause-705639/client.key
	I0103 19:59:20.334278   46928 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/pause-705639/apiserver.key.e5e90310
	I0103 19:59:20.334341   46928 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/pause-705639/proxy-client.key
	I0103 19:59:20.334485   46928 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795.pem (1338 bytes)
	W0103 19:59:20.334536   46928 certs.go:433] ignoring /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795_empty.pem, impossibly tiny 0 bytes
	I0103 19:59:20.334550   46928 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem (1675 bytes)
	I0103 19:59:20.334585   46928 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem (1078 bytes)
	I0103 19:59:20.334617   46928 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem (1123 bytes)
	I0103 19:59:20.334655   46928 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem (1679 bytes)
	I0103 19:59:20.334716   46928 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem (1708 bytes)
	I0103 19:59:20.335549   46928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/pause-705639/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0103 19:59:20.372318   46928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/pause-705639/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0103 19:59:20.425897   46928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/pause-705639/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0103 19:59:20.473697   46928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/pause-705639/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0103 19:59:20.516065   46928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0103 19:59:20.553310   46928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0103 19:59:20.590513   46928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0103 19:59:20.629493   46928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0103 19:59:20.668588   46928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795.pem --> /usr/share/ca-certificates/16795.pem (1338 bytes)
	I0103 19:59:20.742250   46928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem --> /usr/share/ca-certificates/167952.pem (1708 bytes)
	I0103 19:59:20.787661   46928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0103 19:59:20.835209   46928 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0103 19:59:20.870803   46928 ssh_runner.go:195] Run: openssl version
	I0103 19:59:20.885708   46928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16795.pem && ln -fs /usr/share/ca-certificates/16795.pem /etc/ssl/certs/16795.pem"
	I0103 19:59:20.908505   46928 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16795.pem
	I0103 19:59:20.925767   46928 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  3 19:07 /usr/share/ca-certificates/16795.pem
	I0103 19:59:20.925839   46928 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16795.pem
	I0103 19:59:20.950571   46928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16795.pem /etc/ssl/certs/51391683.0"
	I0103 19:59:20.990282   46928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167952.pem && ln -fs /usr/share/ca-certificates/167952.pem /etc/ssl/certs/167952.pem"
	I0103 19:59:21.023209   46928 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167952.pem
	I0103 19:59:21.030216   46928 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  3 19:07 /usr/share/ca-certificates/167952.pem
	I0103 19:59:21.030278   46928 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167952.pem
	I0103 19:59:21.056941   46928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167952.pem /etc/ssl/certs/3ec20f2e.0"
	I0103 19:59:21.113718   46928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0103 19:59:21.164703   46928 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0103 19:59:21.179753   46928 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  3 18:58 /usr/share/ca-certificates/minikubeCA.pem
	I0103 19:59:21.179830   46928 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0103 19:59:21.194133   46928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0103 19:59:21.227887   46928 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0103 19:59:21.236720   46928 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0103 19:59:21.246545   46928 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0103 19:59:21.254101   46928 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0103 19:59:21.260587   46928 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0103 19:59:21.267057   46928 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0103 19:59:21.274645   46928 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0103 19:59:21.281887   46928 kubeadm.go:404] StartCluster: {Name:pause-705639 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:pause-705639 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.83.234 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 19:59:21.282052   46928 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0103 19:59:21.282124   46928 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0103 19:59:21.350486   46928 cri.go:89] found id: "57dcb0ae1175465ae550344541901cc54c4f621a16c787ffac5585ed8c4d8096"
	I0103 19:59:21.350530   46928 cri.go:89] found id: "71e8917964e7311be804e465985303159c039b98c9dc3bffc4cf1e7b4261a093"
	I0103 19:59:21.350538   46928 cri.go:89] found id: "a7adba197602b612e4e392bde6cf7da3163202a19b6e532fbbf4b7bc37404f58"
	I0103 19:59:21.350544   46928 cri.go:89] found id: "9823d230f9bd71c8d52f916093de16d06baac08bd5828965ead44ac26ed9ffec"
	I0103 19:59:21.350557   46928 cri.go:89] found id: "63d620f20d76e31a4e487465f3d484c8ce1f83e550fe6ce29d5a320b47ce97a0"
	I0103 19:59:21.350564   46928 cri.go:89] found id: "7e31a8a2e842b303278e3d61d90fed93c19952cd17d17428cffae359dd63c732"
	I0103 19:59:21.350569   46928 cri.go:89] found id: ""
	I0103 19:59:21.350636   46928 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-705639 -n pause-705639
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-705639 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-705639 logs -n 25: (1.536118177s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile     |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	| ssh     | -p auto-719541 sudo systemctl                        | auto-719541    | jenkins | v1.32.0 | 03 Jan 24 19:59 UTC | 03 Jan 24 19:59 UTC |
	|         | status kubelet --all --full                          |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p auto-719541 sudo systemctl                        | auto-719541    | jenkins | v1.32.0 | 03 Jan 24 19:59 UTC | 03 Jan 24 19:59 UTC |
	|         | cat kubelet --no-pager                               |                |         |         |                     |                     |
	| ssh     | -p auto-719541 sudo journalctl                       | auto-719541    | jenkins | v1.32.0 | 03 Jan 24 19:59 UTC | 03 Jan 24 19:59 UTC |
	|         | -xeu kubelet --all --full                            |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p auto-719541 sudo cat                              | auto-719541    | jenkins | v1.32.0 | 03 Jan 24 19:59 UTC | 03 Jan 24 19:59 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                |         |         |                     |                     |
	| ssh     | -p auto-719541 sudo cat                              | auto-719541    | jenkins | v1.32.0 | 03 Jan 24 19:59 UTC | 03 Jan 24 19:59 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                |         |         |                     |                     |
	| ssh     | -p auto-719541 sudo systemctl                        | auto-719541    | jenkins | v1.32.0 | 03 Jan 24 19:59 UTC |                     |
	|         | status docker --all --full                           |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p auto-719541 sudo systemctl                        | auto-719541    | jenkins | v1.32.0 | 03 Jan 24 19:59 UTC | 03 Jan 24 19:59 UTC |
	|         | cat docker --no-pager                                |                |         |         |                     |                     |
	| ssh     | -p auto-719541 sudo cat                              | auto-719541    | jenkins | v1.32.0 | 03 Jan 24 19:59 UTC | 03 Jan 24 19:59 UTC |
	|         | /etc/docker/daemon.json                              |                |         |         |                     |                     |
	| ssh     | -p auto-719541 sudo docker                           | auto-719541    | jenkins | v1.32.0 | 03 Jan 24 19:59 UTC |                     |
	|         | system info                                          |                |         |         |                     |                     |
	| ssh     | -p auto-719541 sudo systemctl                        | auto-719541    | jenkins | v1.32.0 | 03 Jan 24 19:59 UTC |                     |
	|         | status cri-docker --all --full                       |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p auto-719541 sudo systemctl                        | auto-719541    | jenkins | v1.32.0 | 03 Jan 24 19:59 UTC | 03 Jan 24 19:59 UTC |
	|         | cat cri-docker --no-pager                            |                |         |         |                     |                     |
	| ssh     | -p auto-719541 sudo cat                              | auto-719541    | jenkins | v1.32.0 | 03 Jan 24 19:59 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                |         |         |                     |                     |
	| ssh     | -p auto-719541 sudo cat                              | auto-719541    | jenkins | v1.32.0 | 03 Jan 24 19:59 UTC | 03 Jan 24 19:59 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                |         |         |                     |                     |
	| ssh     | -p auto-719541 sudo                                  | auto-719541    | jenkins | v1.32.0 | 03 Jan 24 19:59 UTC | 03 Jan 24 19:59 UTC |
	|         | cri-dockerd --version                                |                |         |         |                     |                     |
	| ssh     | -p auto-719541 sudo systemctl                        | auto-719541    | jenkins | v1.32.0 | 03 Jan 24 19:59 UTC |                     |
	|         | status containerd --all --full                       |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p auto-719541 sudo systemctl                        | auto-719541    | jenkins | v1.32.0 | 03 Jan 24 19:59 UTC | 03 Jan 24 19:59 UTC |
	|         | cat containerd --no-pager                            |                |         |         |                     |                     |
	| ssh     | -p auto-719541 sudo cat                              | auto-719541    | jenkins | v1.32.0 | 03 Jan 24 19:59 UTC | 03 Jan 24 19:59 UTC |
	|         | /lib/systemd/system/containerd.service               |                |         |         |                     |                     |
	| ssh     | -p auto-719541 sudo cat                              | auto-719541    | jenkins | v1.32.0 | 03 Jan 24 19:59 UTC | 03 Jan 24 19:59 UTC |
	|         | /etc/containerd/config.toml                          |                |         |         |                     |                     |
	| ssh     | -p auto-719541 sudo containerd                       | auto-719541    | jenkins | v1.32.0 | 03 Jan 24 19:59 UTC | 03 Jan 24 19:59 UTC |
	|         | config dump                                          |                |         |         |                     |                     |
	| ssh     | -p auto-719541 sudo systemctl                        | auto-719541    | jenkins | v1.32.0 | 03 Jan 24 19:59 UTC | 03 Jan 24 19:59 UTC |
	|         | status crio --all --full                             |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p auto-719541 sudo systemctl                        | auto-719541    | jenkins | v1.32.0 | 03 Jan 24 19:59 UTC | 03 Jan 24 19:59 UTC |
	|         | cat crio --no-pager                                  |                |         |         |                     |                     |
	| ssh     | -p auto-719541 sudo find                             | auto-719541    | jenkins | v1.32.0 | 03 Jan 24 19:59 UTC | 03 Jan 24 19:59 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                |         |         |                     |                     |
	| delete  | -p auto-719541                                       | auto-719541    | jenkins | v1.32.0 | 03 Jan 24 19:59 UTC | 03 Jan 24 19:59 UTC |
	| start   | -p calico-719541 --memory=3072                       | calico-719541  | jenkins | v1.32.0 | 03 Jan 24 19:59 UTC |                     |
	|         | --alsologtostderr --wait=true                        |                |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                |         |         |                     |                     |
	|         | --cni=calico --driver=kvm2                           |                |         |         |                     |                     |
	|         | --container-runtime=crio                             |                |         |         |                     |                     |
	| ssh     | -p kindnet-719541 pgrep -a                           | kindnet-719541 | jenkins | v1.32.0 | 03 Jan 24 19:59 UTC | 03 Jan 24 19:59 UTC |
	|         | kubelet                                              |                |         |         |                     |                     |
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
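For reference, the final "start" entry recorded above expands to roughly the following single invocation (a sketch reassembled from the table rows for profile calico-719541; the binary path follows the MINIKUBE_BIN value logged below, and any flag not shown is assumed to be left at its default):

  out/minikube-linux-amd64 start -p calico-719541 --memory=3072 \
    --alsologtostderr --wait=true --wait-timeout=15m \
    --cni=calico --driver=kvm2 --container-runtime=crio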
	
	
	==> Last Start <==
	Log file created at: 2024/01/03 19:59:50
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0103 19:59:50.094780   48849 out.go:296] Setting OutFile to fd 1 ...
	I0103 19:59:50.094894   48849 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 19:59:50.094899   48849 out.go:309] Setting ErrFile to fd 2...
	I0103 19:59:50.094903   48849 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 19:59:50.095140   48849 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17885-9609/.minikube/bin
	I0103 19:59:50.095703   48849 out.go:303] Setting JSON to false
	I0103 19:59:50.096733   48849 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6137,"bootTime":1704305853,"procs":334,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0103 19:59:50.096801   48849 start.go:138] virtualization: kvm guest
	I0103 19:59:50.099239   48849 out.go:177] * [calico-719541] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0103 19:59:50.100545   48849 out.go:177]   - MINIKUBE_LOCATION=17885
	I0103 19:59:50.100552   48849 notify.go:220] Checking for updates...
	I0103 19:59:50.103406   48849 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0103 19:59:50.104757   48849 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17885-9609/kubeconfig
	I0103 19:59:50.106025   48849 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17885-9609/.minikube
	I0103 19:59:50.107290   48849 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0103 19:59:50.108536   48849 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0103 19:59:50.110211   48849 config.go:182] Loaded profile config "kindnet-719541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 19:59:50.110361   48849 config.go:182] Loaded profile config "pause-705639": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 19:59:50.110430   48849 config.go:182] Loaded profile config "stopped-upgrade-857735": Driver=, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I0103 19:59:50.110533   48849 driver.go:392] Setting default libvirt URI to qemu:///system
	I0103 19:59:50.148426   48849 out.go:177] * Using the kvm2 driver based on user configuration
	I0103 19:59:50.149951   48849 start.go:298] selected driver: kvm2
	I0103 19:59:50.149964   48849 start.go:902] validating driver "kvm2" against <nil>
	I0103 19:59:50.149980   48849 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0103 19:59:50.150755   48849 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 19:59:50.150861   48849 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17885-9609/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0103 19:59:50.165635   48849 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0103 19:59:50.165691   48849 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0103 19:59:50.165886   48849 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0103 19:59:50.165943   48849 cni.go:84] Creating CNI manager for "calico"
	I0103 19:59:50.165956   48849 start_flags.go:318] Found "Calico" CNI - setting NetworkPlugin=cni
	I0103 19:59:50.165965   48849 start_flags.go:323] config:
	{Name:calico-719541 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:calico-719541 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISo
cket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 19:59:50.166095   48849 iso.go:125] acquiring lock: {Name:mk59d09085a9554144b68de9b7bfe0e0fce53cc5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 19:59:50.168050   48849 out.go:177] * Starting control plane node calico-719541 in cluster calico-719541
	I0103 19:59:50.169620   48849 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0103 19:59:50.169656   48849 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0103 19:59:50.169665   48849 cache.go:56] Caching tarball of preloaded images
	I0103 19:59:50.169777   48849 preload.go:174] Found /home/jenkins/minikube-integration/17885-9609/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0103 19:59:50.169792   48849 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0103 19:59:50.169918   48849 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/calico-719541/config.json ...
	I0103 19:59:50.169942   48849 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/calico-719541/config.json: {Name:mkbb1a2e8d8fc93b31f76881d0e7f9131f3b648a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 19:59:50.170069   48849 start.go:365] acquiring machines lock for calico-719541: {Name:mk43df5d7e9fef8aa5f3e5c539ca15bff35ae8cf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0103 19:59:50.170096   48849 start.go:369] acquired machines lock for "calico-719541" in 14.82µs
	I0103 19:59:50.170119   48849 start.go:93] Provisioning new machine with config: &{Name:calico-719541 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:calico-719541 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144
MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0103 19:59:50.170190   48849 start.go:125] createHost starting for "" (driver="kvm2")
	I0103 19:59:48.274096   46928 pod_ready.go:102] pod "etcd-pause-705639" in "kube-system" namespace has status "Ready":"False"
	I0103 19:59:50.772593   46928 pod_ready.go:102] pod "etcd-pause-705639" in "kube-system" namespace has status "Ready":"False"
	I0103 19:59:51.275711   46928 pod_ready.go:92] pod "etcd-pause-705639" in "kube-system" namespace has status "Ready":"True"
	I0103 19:59:51.275745   46928 pod_ready.go:81] duration metric: took 5.010611315s waiting for pod "etcd-pause-705639" in "kube-system" namespace to be "Ready" ...
	I0103 19:59:51.275757   46928 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-705639" in "kube-system" namespace to be "Ready" ...
	I0103 19:59:50.171940   48849 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0103 19:59:50.172060   48849 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 19:59:50.172122   48849 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 19:59:50.186589   48849 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39833
	I0103 19:59:50.187029   48849 main.go:141] libmachine: () Calling .GetVersion
	I0103 19:59:50.187569   48849 main.go:141] libmachine: Using API Version  1
	I0103 19:59:50.187593   48849 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 19:59:50.187945   48849 main.go:141] libmachine: () Calling .GetMachineName
	I0103 19:59:50.188129   48849 main.go:141] libmachine: (calico-719541) Calling .GetMachineName
	I0103 19:59:50.188250   48849 main.go:141] libmachine: (calico-719541) Calling .DriverName
	I0103 19:59:50.188379   48849 start.go:159] libmachine.API.Create for "calico-719541" (driver="kvm2")
	I0103 19:59:50.188414   48849 client.go:168] LocalClient.Create starting
	I0103 19:59:50.188455   48849 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem
	I0103 19:59:50.188489   48849 main.go:141] libmachine: Decoding PEM data...
	I0103 19:59:50.188505   48849 main.go:141] libmachine: Parsing certificate...
	I0103 19:59:50.188556   48849 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem
	I0103 19:59:50.188574   48849 main.go:141] libmachine: Decoding PEM data...
	I0103 19:59:50.188587   48849 main.go:141] libmachine: Parsing certificate...
	I0103 19:59:50.188602   48849 main.go:141] libmachine: Running pre-create checks...
	I0103 19:59:50.188610   48849 main.go:141] libmachine: (calico-719541) Calling .PreCreateCheck
	I0103 19:59:50.188901   48849 main.go:141] libmachine: (calico-719541) Calling .GetConfigRaw
	I0103 19:59:50.189260   48849 main.go:141] libmachine: Creating machine...
	I0103 19:59:50.189273   48849 main.go:141] libmachine: (calico-719541) Calling .Create
	I0103 19:59:50.189433   48849 main.go:141] libmachine: (calico-719541) Creating KVM machine...
	I0103 19:59:50.190501   48849 main.go:141] libmachine: (calico-719541) DBG | found existing default KVM network
	I0103 19:59:50.191941   48849 main.go:141] libmachine: (calico-719541) DBG | I0103 19:59:50.191795   48872 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f840}
	I0103 19:59:50.197011   48849 main.go:141] libmachine: (calico-719541) DBG | trying to create private KVM network mk-calico-719541 192.168.39.0/24...
	I0103 19:59:50.276380   48849 main.go:141] libmachine: (calico-719541) DBG | private KVM network mk-calico-719541 192.168.39.0/24 created
	I0103 19:59:50.276428   48849 main.go:141] libmachine: (calico-719541) DBG | I0103 19:59:50.276287   48872 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17885-9609/.minikube
	I0103 19:59:50.276443   48849 main.go:141] libmachine: (calico-719541) Setting up store path in /home/jenkins/minikube-integration/17885-9609/.minikube/machines/calico-719541 ...
	I0103 19:59:50.276464   48849 main.go:141] libmachine: (calico-719541) Building disk image from file:///home/jenkins/minikube-integration/17885-9609/.minikube/cache/iso/amd64/minikube-v1.32.1-1702708929-17806-amd64.iso
	I0103 19:59:50.276486   48849 main.go:141] libmachine: (calico-719541) Downloading /home/jenkins/minikube-integration/17885-9609/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17885-9609/.minikube/cache/iso/amd64/minikube-v1.32.1-1702708929-17806-amd64.iso...
	I0103 19:59:50.506157   48849 main.go:141] libmachine: (calico-719541) DBG | I0103 19:59:50.506007   48872 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/calico-719541/id_rsa...
	I0103 19:59:50.608295   48849 main.go:141] libmachine: (calico-719541) DBG | I0103 19:59:50.608186   48872 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/calico-719541/calico-719541.rawdisk...
	I0103 19:59:50.608322   48849 main.go:141] libmachine: (calico-719541) DBG | Writing magic tar header
	I0103 19:59:50.608335   48849 main.go:141] libmachine: (calico-719541) DBG | Writing SSH key tar header
	I0103 19:59:50.608344   48849 main.go:141] libmachine: (calico-719541) DBG | I0103 19:59:50.608308   48872 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17885-9609/.minikube/machines/calico-719541 ...
	I0103 19:59:50.608432   48849 main.go:141] libmachine: (calico-719541) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/calico-719541
	I0103 19:59:50.608474   48849 main.go:141] libmachine: (calico-719541) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17885-9609/.minikube/machines
	I0103 19:59:50.608492   48849 main.go:141] libmachine: (calico-719541) Setting executable bit set on /home/jenkins/minikube-integration/17885-9609/.minikube/machines/calico-719541 (perms=drwx------)
	I0103 19:59:50.608510   48849 main.go:141] libmachine: (calico-719541) Setting executable bit set on /home/jenkins/minikube-integration/17885-9609/.minikube/machines (perms=drwxr-xr-x)
	I0103 19:59:50.608525   48849 main.go:141] libmachine: (calico-719541) Setting executable bit set on /home/jenkins/minikube-integration/17885-9609/.minikube (perms=drwxr-xr-x)
	I0103 19:59:50.608540   48849 main.go:141] libmachine: (calico-719541) Setting executable bit set on /home/jenkins/minikube-integration/17885-9609 (perms=drwxrwxr-x)
	I0103 19:59:50.608550   48849 main.go:141] libmachine: (calico-719541) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17885-9609/.minikube
	I0103 19:59:50.608557   48849 main.go:141] libmachine: (calico-719541) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0103 19:59:50.608568   48849 main.go:141] libmachine: (calico-719541) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17885-9609
	I0103 19:59:50.608582   48849 main.go:141] libmachine: (calico-719541) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0103 19:59:50.608596   48849 main.go:141] libmachine: (calico-719541) DBG | Checking permissions on dir: /home/jenkins
	I0103 19:59:50.608607   48849 main.go:141] libmachine: (calico-719541) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0103 19:59:50.608623   48849 main.go:141] libmachine: (calico-719541) Creating domain...
	I0103 19:59:50.608674   48849 main.go:141] libmachine: (calico-719541) DBG | Checking permissions on dir: /home
	I0103 19:59:50.608694   48849 main.go:141] libmachine: (calico-719541) DBG | Skipping /home - not owner
	I0103 19:59:50.609659   48849 main.go:141] libmachine: (calico-719541) define libvirt domain using xml: 
	I0103 19:59:50.609683   48849 main.go:141] libmachine: (calico-719541) <domain type='kvm'>
	I0103 19:59:50.609694   48849 main.go:141] libmachine: (calico-719541)   <name>calico-719541</name>
	I0103 19:59:50.609706   48849 main.go:141] libmachine: (calico-719541)   <memory unit='MiB'>3072</memory>
	I0103 19:59:50.609720   48849 main.go:141] libmachine: (calico-719541)   <vcpu>2</vcpu>
	I0103 19:59:50.609732   48849 main.go:141] libmachine: (calico-719541)   <features>
	I0103 19:59:50.609741   48849 main.go:141] libmachine: (calico-719541)     <acpi/>
	I0103 19:59:50.609754   48849 main.go:141] libmachine: (calico-719541)     <apic/>
	I0103 19:59:50.609767   48849 main.go:141] libmachine: (calico-719541)     <pae/>
	I0103 19:59:50.609777   48849 main.go:141] libmachine: (calico-719541)     
	I0103 19:59:50.609799   48849 main.go:141] libmachine: (calico-719541)   </features>
	I0103 19:59:50.609820   48849 main.go:141] libmachine: (calico-719541)   <cpu mode='host-passthrough'>
	I0103 19:59:50.609838   48849 main.go:141] libmachine: (calico-719541)   
	I0103 19:59:50.609843   48849 main.go:141] libmachine: (calico-719541)   </cpu>
	I0103 19:59:50.609849   48849 main.go:141] libmachine: (calico-719541)   <os>
	I0103 19:59:50.609864   48849 main.go:141] libmachine: (calico-719541)     <type>hvm</type>
	I0103 19:59:50.609873   48849 main.go:141] libmachine: (calico-719541)     <boot dev='cdrom'/>
	I0103 19:59:50.609878   48849 main.go:141] libmachine: (calico-719541)     <boot dev='hd'/>
	I0103 19:59:50.609884   48849 main.go:141] libmachine: (calico-719541)     <bootmenu enable='no'/>
	I0103 19:59:50.609891   48849 main.go:141] libmachine: (calico-719541)   </os>
	I0103 19:59:50.609897   48849 main.go:141] libmachine: (calico-719541)   <devices>
	I0103 19:59:50.609905   48849 main.go:141] libmachine: (calico-719541)     <disk type='file' device='cdrom'>
	I0103 19:59:50.609918   48849 main.go:141] libmachine: (calico-719541)       <source file='/home/jenkins/minikube-integration/17885-9609/.minikube/machines/calico-719541/boot2docker.iso'/>
	I0103 19:59:50.609928   48849 main.go:141] libmachine: (calico-719541)       <target dev='hdc' bus='scsi'/>
	I0103 19:59:50.609941   48849 main.go:141] libmachine: (calico-719541)       <readonly/>
	I0103 19:59:50.609958   48849 main.go:141] libmachine: (calico-719541)     </disk>
	I0103 19:59:50.609975   48849 main.go:141] libmachine: (calico-719541)     <disk type='file' device='disk'>
	I0103 19:59:50.609989   48849 main.go:141] libmachine: (calico-719541)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0103 19:59:50.610028   48849 main.go:141] libmachine: (calico-719541)       <source file='/home/jenkins/minikube-integration/17885-9609/.minikube/machines/calico-719541/calico-719541.rawdisk'/>
	I0103 19:59:50.610063   48849 main.go:141] libmachine: (calico-719541)       <target dev='hda' bus='virtio'/>
	I0103 19:59:50.610079   48849 main.go:141] libmachine: (calico-719541)     </disk>
	I0103 19:59:50.610091   48849 main.go:141] libmachine: (calico-719541)     <interface type='network'>
	I0103 19:59:50.610103   48849 main.go:141] libmachine: (calico-719541)       <source network='mk-calico-719541'/>
	I0103 19:59:50.610115   48849 main.go:141] libmachine: (calico-719541)       <model type='virtio'/>
	I0103 19:59:50.610125   48849 main.go:141] libmachine: (calico-719541)     </interface>
	I0103 19:59:50.610134   48849 main.go:141] libmachine: (calico-719541)     <interface type='network'>
	I0103 19:59:50.610145   48849 main.go:141] libmachine: (calico-719541)       <source network='default'/>
	I0103 19:59:50.610158   48849 main.go:141] libmachine: (calico-719541)       <model type='virtio'/>
	I0103 19:59:50.610172   48849 main.go:141] libmachine: (calico-719541)     </interface>
	I0103 19:59:50.610184   48849 main.go:141] libmachine: (calico-719541)     <serial type='pty'>
	I0103 19:59:50.610194   48849 main.go:141] libmachine: (calico-719541)       <target port='0'/>
	I0103 19:59:50.610205   48849 main.go:141] libmachine: (calico-719541)     </serial>
	I0103 19:59:50.610217   48849 main.go:141] libmachine: (calico-719541)     <console type='pty'>
	I0103 19:59:50.610235   48849 main.go:141] libmachine: (calico-719541)       <target type='serial' port='0'/>
	I0103 19:59:50.610273   48849 main.go:141] libmachine: (calico-719541)     </console>
	I0103 19:59:50.610292   48849 main.go:141] libmachine: (calico-719541)     <rng model='virtio'>
	I0103 19:59:50.610303   48849 main.go:141] libmachine: (calico-719541)       <backend model='random'>/dev/random</backend>
	I0103 19:59:50.610312   48849 main.go:141] libmachine: (calico-719541)     </rng>
	I0103 19:59:50.610318   48849 main.go:141] libmachine: (calico-719541)     
	I0103 19:59:50.610335   48849 main.go:141] libmachine: (calico-719541)     
	I0103 19:59:50.610356   48849 main.go:141] libmachine: (calico-719541)   </devices>
	I0103 19:59:50.610370   48849 main.go:141] libmachine: (calico-719541) </domain>
	I0103 19:59:50.610380   48849 main.go:141] libmachine: (calico-719541) 
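(Editorial aside: the libvirt domain defined in the XML above can be inspected directly on the host once created. A minimal sketch, assuming virsh is available and using the qemu:///system URI shown in the profile config; the domain name matches the profile name:)

  virsh --connect qemu:///system dominfo calico-719541    # state, vCPUs, memory
  virsh --connect qemu:///system dumpxml calico-719541    # full live domain XML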
	I0103 19:59:50.614719   48849 main.go:141] libmachine: (calico-719541) DBG | domain calico-719541 has defined MAC address 52:54:00:be:c8:76 in network default
	I0103 19:59:50.615260   48849 main.go:141] libmachine: (calico-719541) Ensuring networks are active...
	I0103 19:59:50.615293   48849 main.go:141] libmachine: (calico-719541) DBG | domain calico-719541 has defined MAC address 52:54:00:c3:db:f8 in network mk-calico-719541
	I0103 19:59:50.615986   48849 main.go:141] libmachine: (calico-719541) Ensuring network default is active
	I0103 19:59:50.616274   48849 main.go:141] libmachine: (calico-719541) Ensuring network mk-calico-719541 is active
	I0103 19:59:50.616779   48849 main.go:141] libmachine: (calico-719541) Getting domain xml...
	I0103 19:59:50.617373   48849 main.go:141] libmachine: (calico-719541) Creating domain...
	I0103 19:59:51.880099   48849 main.go:141] libmachine: (calico-719541) Waiting to get IP...
	I0103 19:59:51.880756   48849 main.go:141] libmachine: (calico-719541) DBG | domain calico-719541 has defined MAC address 52:54:00:c3:db:f8 in network mk-calico-719541
	I0103 19:59:51.881201   48849 main.go:141] libmachine: (calico-719541) DBG | unable to find current IP address of domain calico-719541 in network mk-calico-719541
	I0103 19:59:51.881238   48849 main.go:141] libmachine: (calico-719541) DBG | I0103 19:59:51.881187   48872 retry.go:31] will retry after 289.795257ms: waiting for machine to come up
	I0103 19:59:52.172693   48849 main.go:141] libmachine: (calico-719541) DBG | domain calico-719541 has defined MAC address 52:54:00:c3:db:f8 in network mk-calico-719541
	I0103 19:59:52.173261   48849 main.go:141] libmachine: (calico-719541) DBG | unable to find current IP address of domain calico-719541 in network mk-calico-719541
	I0103 19:59:52.173291   48849 main.go:141] libmachine: (calico-719541) DBG | I0103 19:59:52.173213   48872 retry.go:31] will retry after 294.821334ms: waiting for machine to come up
	I0103 19:59:52.469691   48849 main.go:141] libmachine: (calico-719541) DBG | domain calico-719541 has defined MAC address 52:54:00:c3:db:f8 in network mk-calico-719541
	I0103 19:59:52.470135   48849 main.go:141] libmachine: (calico-719541) DBG | unable to find current IP address of domain calico-719541 in network mk-calico-719541
	I0103 19:59:52.470173   48849 main.go:141] libmachine: (calico-719541) DBG | I0103 19:59:52.470092   48872 retry.go:31] will retry after 375.646278ms: waiting for machine to come up
	I0103 19:59:52.847532   48849 main.go:141] libmachine: (calico-719541) DBG | domain calico-719541 has defined MAC address 52:54:00:c3:db:f8 in network mk-calico-719541
	I0103 19:59:52.848107   48849 main.go:141] libmachine: (calico-719541) DBG | unable to find current IP address of domain calico-719541 in network mk-calico-719541
	I0103 19:59:52.848135   48849 main.go:141] libmachine: (calico-719541) DBG | I0103 19:59:52.848065   48872 retry.go:31] will retry after 382.660652ms: waiting for machine to come up
	I0103 19:59:53.232569   48849 main.go:141] libmachine: (calico-719541) DBG | domain calico-719541 has defined MAC address 52:54:00:c3:db:f8 in network mk-calico-719541
	I0103 19:59:53.233057   48849 main.go:141] libmachine: (calico-719541) DBG | unable to find current IP address of domain calico-719541 in network mk-calico-719541
	I0103 19:59:53.233088   48849 main.go:141] libmachine: (calico-719541) DBG | I0103 19:59:53.233015   48872 retry.go:31] will retry after 522.321303ms: waiting for machine to come up
	I0103 19:59:53.756545   48849 main.go:141] libmachine: (calico-719541) DBG | domain calico-719541 has defined MAC address 52:54:00:c3:db:f8 in network mk-calico-719541
	I0103 19:59:53.757098   48849 main.go:141] libmachine: (calico-719541) DBG | unable to find current IP address of domain calico-719541 in network mk-calico-719541
	I0103 19:59:53.757131   48849 main.go:141] libmachine: (calico-719541) DBG | I0103 19:59:53.757041   48872 retry.go:31] will retry after 694.331165ms: waiting for machine to come up
	I0103 19:59:54.453074   48849 main.go:141] libmachine: (calico-719541) DBG | domain calico-719541 has defined MAC address 52:54:00:c3:db:f8 in network mk-calico-719541
	I0103 19:59:54.453615   48849 main.go:141] libmachine: (calico-719541) DBG | unable to find current IP address of domain calico-719541 in network mk-calico-719541
	I0103 19:59:54.453644   48849 main.go:141] libmachine: (calico-719541) DBG | I0103 19:59:54.453533   48872 retry.go:31] will retry after 1.049710109s: waiting for machine to come up
	I0103 19:59:53.283961   46928 pod_ready.go:102] pod "kube-apiserver-pause-705639" in "kube-system" namespace has status "Ready":"False"
	I0103 19:59:55.285816   46928 pod_ready.go:102] pod "kube-apiserver-pause-705639" in "kube-system" namespace has status "Ready":"False"
	I0103 19:59:57.784820   46928 pod_ready.go:102] pod "kube-apiserver-pause-705639" in "kube-system" namespace has status "Ready":"False"
	I0103 19:59:58.791842   46928 pod_ready.go:92] pod "kube-apiserver-pause-705639" in "kube-system" namespace has status "Ready":"True"
	I0103 19:59:58.791866   46928 pod_ready.go:81] duration metric: took 7.51610105s waiting for pod "kube-apiserver-pause-705639" in "kube-system" namespace to be "Ready" ...
	I0103 19:59:58.791878   46928 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-705639" in "kube-system" namespace to be "Ready" ...
	I0103 19:59:58.803930   46928 pod_ready.go:92] pod "kube-controller-manager-pause-705639" in "kube-system" namespace has status "Ready":"True"
	I0103 19:59:58.803950   46928 pod_ready.go:81] duration metric: took 12.065134ms waiting for pod "kube-controller-manager-pause-705639" in "kube-system" namespace to be "Ready" ...
	I0103 19:59:58.803959   46928 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-lwbnd" in "kube-system" namespace to be "Ready" ...
	I0103 19:59:58.812295   46928 pod_ready.go:92] pod "kube-proxy-lwbnd" in "kube-system" namespace has status "Ready":"True"
	I0103 19:59:58.812315   46928 pod_ready.go:81] duration metric: took 8.351094ms waiting for pod "kube-proxy-lwbnd" in "kube-system" namespace to be "Ready" ...
	I0103 19:59:58.812323   46928 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-705639" in "kube-system" namespace to be "Ready" ...
	I0103 19:59:58.820307   46928 pod_ready.go:92] pod "kube-scheduler-pause-705639" in "kube-system" namespace has status "Ready":"True"
	I0103 19:59:58.820328   46928 pod_ready.go:81] duration metric: took 7.998404ms waiting for pod "kube-scheduler-pause-705639" in "kube-system" namespace to be "Ready" ...
	I0103 19:59:58.820338   46928 pod_ready.go:38] duration metric: took 12.570692536s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 19:59:58.820356   46928 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0103 19:59:58.832675   46928 ops.go:34] apiserver oom_adj: -16
	I0103 19:59:58.832696   46928 kubeadm.go:640] restartCluster took 37.373494615s
	I0103 19:59:58.832705   46928 kubeadm.go:406] StartCluster complete in 37.550826018s
	I0103 19:59:58.832725   46928 settings.go:142] acquiring lock: {Name:mkd213c48538fa01cb82b417485055a8adbf5e4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 19:59:58.832813   46928 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17885-9609/kubeconfig
	I0103 19:59:58.833695   46928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-9609/kubeconfig: {Name:mkbd4e6a8b39f5a4a43fb71671a7bbd8b1617cf1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 19:59:58.833932   46928 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0103 19:59:58.833965   46928 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0103 19:59:58.835927   46928 out.go:177] * Enabled addons: 
	I0103 19:59:58.834180   46928 config.go:182] Loaded profile config "pause-705639": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 19:59:58.834752   46928 kapi.go:59] client config for pause-705639: &rest.Config{Host:"https://192.168.83.234:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17885-9609/.minikube/profiles/pause-705639/client.crt", KeyFile:"/home/jenkins/minikube-integration/17885-9609/.minikube/profiles/pause-705639/client.key", CAFile:"/home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]strin
g(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c20060), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0103 19:59:58.837508   46928 addons.go:508] enable addons completed in 3.551926ms: enabled=[]
	I0103 19:59:58.840765   46928 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-705639" context rescaled to 1 replicas
	I0103 19:59:58.840806   46928 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.83.234 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0103 19:59:58.842635   46928 out.go:177] * Verifying Kubernetes components...
	I0103 19:59:55.506623   48849 main.go:141] libmachine: (calico-719541) DBG | domain calico-719541 has defined MAC address 52:54:00:c3:db:f8 in network mk-calico-719541
	I0103 19:59:55.507135   48849 main.go:141] libmachine: (calico-719541) DBG | unable to find current IP address of domain calico-719541 in network mk-calico-719541
	I0103 19:59:55.507160   48849 main.go:141] libmachine: (calico-719541) DBG | I0103 19:59:55.507071   48872 retry.go:31] will retry after 1.129495665s: waiting for machine to come up
	I0103 19:59:56.638537   48849 main.go:141] libmachine: (calico-719541) DBG | domain calico-719541 has defined MAC address 52:54:00:c3:db:f8 in network mk-calico-719541
	I0103 19:59:56.639029   48849 main.go:141] libmachine: (calico-719541) DBG | unable to find current IP address of domain calico-719541 in network mk-calico-719541
	I0103 19:59:56.639061   48849 main.go:141] libmachine: (calico-719541) DBG | I0103 19:59:56.638973   48872 retry.go:31] will retry after 1.563343867s: waiting for machine to come up
	I0103 19:59:58.203747   48849 main.go:141] libmachine: (calico-719541) DBG | domain calico-719541 has defined MAC address 52:54:00:c3:db:f8 in network mk-calico-719541
	I0103 19:59:58.204209   48849 main.go:141] libmachine: (calico-719541) DBG | unable to find current IP address of domain calico-719541 in network mk-calico-719541
	I0103 19:59:58.204239   48849 main.go:141] libmachine: (calico-719541) DBG | I0103 19:59:58.204179   48872 retry.go:31] will retry after 2.070449561s: waiting for machine to come up
	I0103 19:59:58.843952   46928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 19:59:58.958681   46928 node_ready.go:35] waiting up to 6m0s for node "pause-705639" to be "Ready" ...
	I0103 19:59:58.959089   46928 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0103 19:59:58.964497   46928 node_ready.go:49] node "pause-705639" has status "Ready":"True"
	I0103 19:59:58.964529   46928 node_ready.go:38] duration metric: took 5.770132ms waiting for node "pause-705639" to be "Ready" ...
	I0103 19:59:58.964540   46928 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 19:59:58.971047   46928 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-fkkp5" in "kube-system" namespace to be "Ready" ...
	I0103 19:59:59.182776   46928 pod_ready.go:92] pod "coredns-5dd5756b68-fkkp5" in "kube-system" namespace has status "Ready":"True"
	I0103 19:59:59.182806   46928 pod_ready.go:81] duration metric: took 211.687047ms waiting for pod "coredns-5dd5756b68-fkkp5" in "kube-system" namespace to be "Ready" ...
	I0103 19:59:59.182821   46928 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-705639" in "kube-system" namespace to be "Ready" ...
	I0103 19:59:59.581989   46928 pod_ready.go:92] pod "etcd-pause-705639" in "kube-system" namespace has status "Ready":"True"
	I0103 19:59:59.582014   46928 pod_ready.go:81] duration metric: took 399.18598ms waiting for pod "etcd-pause-705639" in "kube-system" namespace to be "Ready" ...
	I0103 19:59:59.582024   46928 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-705639" in "kube-system" namespace to be "Ready" ...
	I0103 19:59:59.982161   46928 pod_ready.go:92] pod "kube-apiserver-pause-705639" in "kube-system" namespace has status "Ready":"True"
	I0103 19:59:59.982189   46928 pod_ready.go:81] duration metric: took 400.157537ms waiting for pod "kube-apiserver-pause-705639" in "kube-system" namespace to be "Ready" ...
	I0103 19:59:59.982202   46928 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-705639" in "kube-system" namespace to be "Ready" ...
	I0103 20:00:00.381104   46928 pod_ready.go:92] pod "kube-controller-manager-pause-705639" in "kube-system" namespace has status "Ready":"True"
	I0103 20:00:00.381135   46928 pod_ready.go:81] duration metric: took 398.923061ms waiting for pod "kube-controller-manager-pause-705639" in "kube-system" namespace to be "Ready" ...
	I0103 20:00:00.381150   46928 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lwbnd" in "kube-system" namespace to be "Ready" ...
	I0103 20:00:00.781832   46928 pod_ready.go:92] pod "kube-proxy-lwbnd" in "kube-system" namespace has status "Ready":"True"
	I0103 20:00:00.781862   46928 pod_ready.go:81] duration metric: took 400.70431ms waiting for pod "kube-proxy-lwbnd" in "kube-system" namespace to be "Ready" ...
	I0103 20:00:00.781873   46928 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-705639" in "kube-system" namespace to be "Ready" ...
	I0103 20:00:01.182444   46928 pod_ready.go:92] pod "kube-scheduler-pause-705639" in "kube-system" namespace has status "Ready":"True"
	I0103 20:00:01.182470   46928 pod_ready.go:81] duration metric: took 400.591162ms waiting for pod "kube-scheduler-pause-705639" in "kube-system" namespace to be "Ready" ...
	I0103 20:00:01.182479   46928 pod_ready.go:38] duration metric: took 2.217928363s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:00:01.182492   46928 api_server.go:52] waiting for apiserver process to appear ...
	I0103 20:00:01.182561   46928 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:00:01.196343   46928 api_server.go:72] duration metric: took 2.35550047s to wait for apiserver process to appear ...
	I0103 20:00:01.196386   46928 api_server.go:88] waiting for apiserver healthz status ...
	I0103 20:00:01.196413   46928 api_server.go:253] Checking apiserver healthz at https://192.168.83.234:8443/healthz ...
	I0103 20:00:01.204537   46928 api_server.go:279] https://192.168.83.234:8443/healthz returned 200:
	ok
	I0103 20:00:01.206442   46928 api_server.go:141] control plane version: v1.28.4
	I0103 20:00:01.206472   46928 api_server.go:131] duration metric: took 10.07776ms to wait for apiserver health ...
	I0103 20:00:01.206484   46928 system_pods.go:43] waiting for kube-system pods to appear ...
	I0103 20:00:01.385721   46928 system_pods.go:59] 6 kube-system pods found
	I0103 20:00:01.385750   46928 system_pods.go:61] "coredns-5dd5756b68-fkkp5" [9226d155-0c50-444f-9899-7c425b5ea32e] Running
	I0103 20:00:01.385755   46928 system_pods.go:61] "etcd-pause-705639" [7cbb84a5-dfc6-4150-ba57-0a6c00c22a63] Running
	I0103 20:00:01.385759   46928 system_pods.go:61] "kube-apiserver-pause-705639" [388802b7-66b9-4b93-90b6-69533f6808f3] Running
	I0103 20:00:01.385764   46928 system_pods.go:61] "kube-controller-manager-pause-705639" [44057709-897c-4c0c-a6a4-477a83fdb68f] Running
	I0103 20:00:01.385767   46928 system_pods.go:61] "kube-proxy-lwbnd" [2dbfaa3d-dc71-48ee-9746-357990a3b6b5] Running
	I0103 20:00:01.385771   46928 system_pods.go:61] "kube-scheduler-pause-705639" [f5b3ab0d-2f5b-4cb9-a85a-0b007edf09fa] Running
	I0103 20:00:01.385777   46928 system_pods.go:74] duration metric: took 179.286866ms to wait for pod list to return data ...
	I0103 20:00:01.385784   46928 default_sa.go:34] waiting for default service account to be created ...
	I0103 20:00:01.581339   46928 default_sa.go:45] found service account: "default"
	I0103 20:00:01.581368   46928 default_sa.go:55] duration metric: took 195.579067ms for default service account to be created ...
	I0103 20:00:01.581380   46928 system_pods.go:116] waiting for k8s-apps to be running ...
	I0103 20:00:01.785084   46928 system_pods.go:86] 6 kube-system pods found
	I0103 20:00:01.785120   46928 system_pods.go:89] "coredns-5dd5756b68-fkkp5" [9226d155-0c50-444f-9899-7c425b5ea32e] Running
	I0103 20:00:01.785129   46928 system_pods.go:89] "etcd-pause-705639" [7cbb84a5-dfc6-4150-ba57-0a6c00c22a63] Running
	I0103 20:00:01.785136   46928 system_pods.go:89] "kube-apiserver-pause-705639" [388802b7-66b9-4b93-90b6-69533f6808f3] Running
	I0103 20:00:01.785143   46928 system_pods.go:89] "kube-controller-manager-pause-705639" [44057709-897c-4c0c-a6a4-477a83fdb68f] Running
	I0103 20:00:01.785149   46928 system_pods.go:89] "kube-proxy-lwbnd" [2dbfaa3d-dc71-48ee-9746-357990a3b6b5] Running
	I0103 20:00:01.785158   46928 system_pods.go:89] "kube-scheduler-pause-705639" [f5b3ab0d-2f5b-4cb9-a85a-0b007edf09fa] Running
	I0103 20:00:01.785167   46928 system_pods.go:126] duration metric: took 203.780978ms to wait for k8s-apps to be running ...
	I0103 20:00:01.785180   46928 system_svc.go:44] waiting for kubelet service to be running ....
	I0103 20:00:01.785233   46928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 20:00:01.802961   46928 system_svc.go:56] duration metric: took 17.76927ms WaitForService to wait for kubelet.
	I0103 20:00:01.802989   46928 kubeadm.go:581] duration metric: took 2.962157236s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0103 20:00:01.803008   46928 node_conditions.go:102] verifying NodePressure condition ...
	I0103 20:00:01.982111   46928 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0103 20:00:01.982147   46928 node_conditions.go:123] node cpu capacity is 2
	I0103 20:00:01.982161   46928 node_conditions.go:105] duration metric: took 179.147132ms to run NodePressure ...
	I0103 20:00:01.982175   46928 start.go:228] waiting for startup goroutines ...
	I0103 20:00:01.982184   46928 start.go:233] waiting for cluster config update ...
	I0103 20:00:01.982194   46928 start.go:242] writing updated cluster config ...
	I0103 20:00:01.982569   46928 ssh_runner.go:195] Run: rm -f paused
	I0103 20:00:02.044628   46928 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0103 20:00:02.047096   46928 out.go:177] * Done! kubectl is now configured to use "pause-705639" cluster and "default" namespace by default
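(Editorial aside: with the restart complete, the state asserted above could be confirmed by hand. A minimal sketch, assuming kubectl is on PATH and uses the "pause-705639" context that minikube reports it has configured:)

  kubectl --context pause-705639 get nodes
  kubectl --context pause-705639 get pods -n kube-system
  kubectl --context pause-705639 get --raw /healthz    # same endpoint checked above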
	
	
	==> CRI-O <==
	-- Journal begins at Wed 2024-01-03 19:57:30 UTC, ends at Wed 2024-01-03 20:00:03 UTC. --
	Jan 03 20:00:02 pause-705639 crio[2401]: time="2024-01-03 20:00:02.900973561Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=9eaf9a3d-c179-49bd-92df-9dc830470e1c name=/runtime.v1.RuntimeService/Version
	Jan 03 20:00:02 pause-705639 crio[2401]: time="2024-01-03 20:00:02.902095545Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=d435371b-8598-48cc-a8a4-e743426075a9 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 20:00:02 pause-705639 crio[2401]: time="2024-01-03 20:00:02.902422278Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704312002902411370,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=d435371b-8598-48cc-a8a4-e743426075a9 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 20:00:02 pause-705639 crio[2401]: time="2024-01-03 20:00:02.903025642Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=328f4e0c-43f5-4558-86a5-c4f978a1fc9d name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:00:02 pause-705639 crio[2401]: time="2024-01-03 20:00:02.903074604Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=328f4e0c-43f5-4558-86a5-c4f978a1fc9d name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:00:02 pause-705639 crio[2401]: time="2024-01-03 20:00:02.903613067Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:455434fe7c01851c7a44e8f2856bcd27ba50fbc171872b32ca39d3dd6e44aee1,PodSandboxId:7d9ea3e3e063f3e40b9aeb0c3a14b2152b2fc3a7460d10ea9a2b868ff48de02f,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704311985082791945,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-fkkp5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9226d155-0c50-444f-9899-7c425b5ea32e,},Annotations:map[string]string{io.kubernetes.container.hash: 877b8335,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4e78cd600ca588f554ee07b43d3401d215e526075fb0fd6d8c89562ade74c7d,PodSandboxId:4cd50b769d4b7221885b638550de6b09b111bfdaf606ea17711526dfcfb13c91,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704311985109015257,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lwbnd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 2dbfaa3d-dc71-48ee-9746-357990a3b6b5,},Annotations:map[string]string{io.kubernetes.container.hash: cb6296c2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a33080889b8bb498826a4410f447222ced0cec3d0fd7fbba815e1281c6f0425b,PodSandboxId:b3fd2466af4f494744fe3b03eb40a564b890c08a7c6466e28c4e23b75698752d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704311979480811724,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-705639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85e9694
dda2f6e2035142215b093a9d6,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b54a6b8473f946ad2d47d0c5bccc78099bb3853bce060f3cab8e0cc8ed8a2f9d,PodSandboxId:586c5324ad955f559aa33b7add0038e78fdb8e93a045afc881e9de85cab0d7bd,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704311979448310234,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-705639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bf83f638fa3405cd004c68f3ff4d378,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 6c8cf32a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77bb535231c2293addc26368e62c04a3fba1b78e6f33736b626290437a5d1aff,PodSandboxId:1da6fb4cd1f940f7cec2bd9c50060cb33f80039d55089e4055b1a92bcf9485f6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704311979505632130,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-705639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd9c59c6d56232bfd7011bd5817fde97,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6645f9e298bdada5f9bea0c6ce1e25771590562d4af0bd309743e755e6c70c09,PodSandboxId:0ff251fadd26f8b93533ea1c171fc68d04afcb59e38800748c7e032b339c6346,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704311979423584507,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-705639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0173835980ecc42799c1960492dad7b,},Annotations:map[string]s
tring{io.kubernetes.container.hash: a1f241a6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:903578371b774d38b9816330b3a9c348bd260670a0b2eaf6e4a0d9f7257a25d3,PodSandboxId:4cd50b769d4b7221885b638550de6b09b111bfdaf606ea17711526dfcfb13c91,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_EXITED,CreatedAt:1704311971519238479,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lwbnd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dbfaa3d-dc71-48ee-9746-357990a3b6b5,},Annotations:map[string]string{io.kubernetes.container.hash:
cb6296c2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:101176aa35ad26cd0a4f111845d7a4e730e0cad65ea629b64409e2db2d12d0be,PodSandboxId:7d9ea3e3e063f3e40b9aeb0c3a14b2152b2fc3a7460d10ea9a2b868ff48de02f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1704311962081684534,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-fkkp5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9226d155-0c50-444f-9899-7c425b5ea32e,},Annotations:map[string]string{io.kubernetes.container.hash: 877b8335,io.kubernetes.contai
ner.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57bfa544d32df074fee6e6b9271e7ed98ca6c1cc6bbac788ab3f358fc790f198,PodSandboxId:586c5324ad955f559aa33b7add0038e78fdb8e93a045afc881e9de85cab0d7bd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1704311961469083325,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-705639,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 1bf83f638fa3405cd004c68f3ff4d378,},Annotations:map[string]string{io.kubernetes.container.hash: 6c8cf32a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c2237d818c602824bb96ebeb79ae1a17cf8eec0bbf8bb9c1df1d6c42898c8d0,PodSandboxId:b3fd2466af4f494744fe3b03eb40a564b890c08a7c6466e28c4e23b75698752d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_EXITED,CreatedAt:1704311961052853887,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-705639,io.kubernetes.pod.namespace: kube-sy
stem,io.kubernetes.pod.uid: 85e9694dda2f6e2035142215b093a9d6,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1c7f60e2d0e1a72a5badd3317f1eb17c532ba8d564f595f9dbceec121b24424,PodSandboxId:1da6fb4cd1f940f7cec2bd9c50060cb33f80039d55089e4055b1a92bcf9485f6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_EXITED,CreatedAt:1704311961121112356,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-705639,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: cd9c59c6d56232bfd7011bd5817fde97,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57dcb0ae1175465ae550344541901cc54c4f621a16c787ffac5585ed8c4d8096,PodSandboxId:a0ddc31981afb1b9561f831d23f7dd5263c250d6c68325ab76bd3f2a89e484e8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1704311957809683313,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-705639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0173835
980ecc42799c1960492dad7b,},Annotations:map[string]string{io.kubernetes.container.hash: a1f241a6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=328f4e0c-43f5-4558-86a5-c4f978a1fc9d name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:00:02 pause-705639 crio[2401]: time="2024-01-03 20:00:02.959446899Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=0fc08f13-fa37-4884-a7a5-97e8bc0e40ab name=/runtime.v1.RuntimeService/Version
	Jan 03 20:00:02 pause-705639 crio[2401]: time="2024-01-03 20:00:02.959577396Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=0fc08f13-fa37-4884-a7a5-97e8bc0e40ab name=/runtime.v1.RuntimeService/Version
	Jan 03 20:00:02 pause-705639 crio[2401]: time="2024-01-03 20:00:02.963156148Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=20297ff0-a002-48da-ace4-a9560304362e name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 20:00:02 pause-705639 crio[2401]: time="2024-01-03 20:00:02.963577529Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704312002963560968,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=20297ff0-a002-48da-ace4-a9560304362e name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 20:00:02 pause-705639 crio[2401]: time="2024-01-03 20:00:02.964786879Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=878bc26f-dc5e-418d-97e4-32a8278f950d name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:00:02 pause-705639 crio[2401]: time="2024-01-03 20:00:02.964897238Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=878bc26f-dc5e-418d-97e4-32a8278f950d name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:00:02 pause-705639 crio[2401]: time="2024-01-03 20:00:02.965207273Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:455434fe7c01851c7a44e8f2856bcd27ba50fbc171872b32ca39d3dd6e44aee1,PodSandboxId:7d9ea3e3e063f3e40b9aeb0c3a14b2152b2fc3a7460d10ea9a2b868ff48de02f,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704311985082791945,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-fkkp5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9226d155-0c50-444f-9899-7c425b5ea32e,},Annotations:map[string]string{io.kubernetes.container.hash: 877b8335,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4e78cd600ca588f554ee07b43d3401d215e526075fb0fd6d8c89562ade74c7d,PodSandboxId:4cd50b769d4b7221885b638550de6b09b111bfdaf606ea17711526dfcfb13c91,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704311985109015257,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lwbnd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 2dbfaa3d-dc71-48ee-9746-357990a3b6b5,},Annotations:map[string]string{io.kubernetes.container.hash: cb6296c2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a33080889b8bb498826a4410f447222ced0cec3d0fd7fbba815e1281c6f0425b,PodSandboxId:b3fd2466af4f494744fe3b03eb40a564b890c08a7c6466e28c4e23b75698752d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704311979480811724,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-705639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85e9694
dda2f6e2035142215b093a9d6,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b54a6b8473f946ad2d47d0c5bccc78099bb3853bce060f3cab8e0cc8ed8a2f9d,PodSandboxId:586c5324ad955f559aa33b7add0038e78fdb8e93a045afc881e9de85cab0d7bd,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704311979448310234,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-705639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bf83f638fa3405cd004c68f3ff4d378,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 6c8cf32a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77bb535231c2293addc26368e62c04a3fba1b78e6f33736b626290437a5d1aff,PodSandboxId:1da6fb4cd1f940f7cec2bd9c50060cb33f80039d55089e4055b1a92bcf9485f6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704311979505632130,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-705639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd9c59c6d56232bfd7011bd5817fde97,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6645f9e298bdada5f9bea0c6ce1e25771590562d4af0bd309743e755e6c70c09,PodSandboxId:0ff251fadd26f8b93533ea1c171fc68d04afcb59e38800748c7e032b339c6346,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704311979423584507,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-705639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0173835980ecc42799c1960492dad7b,},Annotations:map[string]s
tring{io.kubernetes.container.hash: a1f241a6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:903578371b774d38b9816330b3a9c348bd260670a0b2eaf6e4a0d9f7257a25d3,PodSandboxId:4cd50b769d4b7221885b638550de6b09b111bfdaf606ea17711526dfcfb13c91,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_EXITED,CreatedAt:1704311971519238479,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lwbnd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dbfaa3d-dc71-48ee-9746-357990a3b6b5,},Annotations:map[string]string{io.kubernetes.container.hash:
cb6296c2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:101176aa35ad26cd0a4f111845d7a4e730e0cad65ea629b64409e2db2d12d0be,PodSandboxId:7d9ea3e3e063f3e40b9aeb0c3a14b2152b2fc3a7460d10ea9a2b868ff48de02f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1704311962081684534,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-fkkp5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9226d155-0c50-444f-9899-7c425b5ea32e,},Annotations:map[string]string{io.kubernetes.container.hash: 877b8335,io.kubernetes.contai
ner.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57bfa544d32df074fee6e6b9271e7ed98ca6c1cc6bbac788ab3f358fc790f198,PodSandboxId:586c5324ad955f559aa33b7add0038e78fdb8e93a045afc881e9de85cab0d7bd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1704311961469083325,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-705639,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 1bf83f638fa3405cd004c68f3ff4d378,},Annotations:map[string]string{io.kubernetes.container.hash: 6c8cf32a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c2237d818c602824bb96ebeb79ae1a17cf8eec0bbf8bb9c1df1d6c42898c8d0,PodSandboxId:b3fd2466af4f494744fe3b03eb40a564b890c08a7c6466e28c4e23b75698752d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_EXITED,CreatedAt:1704311961052853887,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-705639,io.kubernetes.pod.namespace: kube-sy
stem,io.kubernetes.pod.uid: 85e9694dda2f6e2035142215b093a9d6,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1c7f60e2d0e1a72a5badd3317f1eb17c532ba8d564f595f9dbceec121b24424,PodSandboxId:1da6fb4cd1f940f7cec2bd9c50060cb33f80039d55089e4055b1a92bcf9485f6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_EXITED,CreatedAt:1704311961121112356,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-705639,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: cd9c59c6d56232bfd7011bd5817fde97,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57dcb0ae1175465ae550344541901cc54c4f621a16c787ffac5585ed8c4d8096,PodSandboxId:a0ddc31981afb1b9561f831d23f7dd5263c250d6c68325ab76bd3f2a89e484e8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1704311957809683313,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-705639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0173835
980ecc42799c1960492dad7b,},Annotations:map[string]string{io.kubernetes.container.hash: a1f241a6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=878bc26f-dc5e-418d-97e4-32a8278f950d name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:00:03 pause-705639 crio[2401]: time="2024-01-03 20:00:03.014973303Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=36f80cf4-9df5-4acc-8284-2fd3b4e2f2ab name=/runtime.v1.RuntimeService/ListPodSandbox
	Jan 03 20:00:03 pause-705639 crio[2401]: time="2024-01-03 20:00:03.015241691Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:4cd50b769d4b7221885b638550de6b09b111bfdaf606ea17711526dfcfb13c91,Metadata:&PodSandboxMetadata{Name:kube-proxy-lwbnd,Uid:2dbfaa3d-dc71-48ee-9746-357990a3b6b5,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1704311971191197477,Labels:map[string]string{controller-revision-hash: 8486c7d9cd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-lwbnd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dbfaa3d-dc71-48ee-9746-357990a3b6b5,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-03T19:58:21.394775153Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7d9ea3e3e063f3e40b9aeb0c3a14b2152b2fc3a7460d10ea9a2b868ff48de02f,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-fkkp5,Uid:9226d155-0c50-444f-9899-7c425b5ea32e,N
amespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1704311959857159634,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-fkkp5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9226d155-0c50-444f-9899-7c425b5ea32e,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-03T19:58:21.719900704Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1da6fb4cd1f940f7cec2bd9c50060cb33f80039d55089e4055b1a92bcf9485f6,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-705639,Uid:cd9c59c6d56232bfd7011bd5817fde97,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1704311959851978510,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-705639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd9c59c6d56232bfd7011bd5817fde97,tier: control-pla
ne,},Annotations:map[string]string{kubernetes.io/config.hash: cd9c59c6d56232bfd7011bd5817fde97,kubernetes.io/config.seen: 2024-01-03T19:58:08.085017950Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b3fd2466af4f494744fe3b03eb40a564b890c08a7c6466e28c4e23b75698752d,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-705639,Uid:85e9694dda2f6e2035142215b093a9d6,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1704311959773622259,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-705639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85e9694dda2f6e2035142215b093a9d6,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 85e9694dda2f6e2035142215b093a9d6,kubernetes.io/config.seen: 2024-01-03T19:58:08.085011331Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:586c5324ad955f559aa33b7add0038e78fdb8e93a045afc881e9de85cab0d7bd,Metadata:&PodSan
dboxMetadata{Name:etcd-pause-705639,Uid:1bf83f638fa3405cd004c68f3ff4d378,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1704311959728110767,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-705639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bf83f638fa3405cd004c68f3ff4d378,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.83.234:2379,kubernetes.io/config.hash: 1bf83f638fa3405cd004c68f3ff4d378,kubernetes.io/config.seen: 2024-01-03T19:58:08.085015373Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0ff251fadd26f8b93533ea1c171fc68d04afcb59e38800748c7e032b339c6346,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-705639,Uid:b0173835980ecc42799c1960492dad7b,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1704311959711213566,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.
kubernetes.pod.name: kube-apiserver-pause-705639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0173835980ecc42799c1960492dad7b,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.83.234:8443,kubernetes.io/config.hash: b0173835980ecc42799c1960492dad7b,kubernetes.io/config.seen: 2024-01-03T19:58:08.085016821Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a0ddc31981afb1b9561f831d23f7dd5263c250d6c68325ab76bd3f2a89e484e8,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-705639,Uid:b0173835980ecc42799c1960492dad7b,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1704311956739821209,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-705639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0173835980ecc42799c1960492dad7b,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernete
s.io/kube-apiserver.advertise-address.endpoint: 192.168.83.234:8443,kubernetes.io/config.hash: b0173835980ecc42799c1960492dad7b,kubernetes.io/config.seen: 2024-01-03T19:58:08.085016821Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=36f80cf4-9df5-4acc-8284-2fd3b4e2f2ab name=/runtime.v1.RuntimeService/ListPodSandbox
	Jan 03 20:00:03 pause-705639 crio[2401]: time="2024-01-03 20:00:03.016260952Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=980948c1-c7aa-4bc0-8a4d-83b83c0c801a name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:00:03 pause-705639 crio[2401]: time="2024-01-03 20:00:03.016314677Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=980948c1-c7aa-4bc0-8a4d-83b83c0c801a name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:00:03 pause-705639 crio[2401]: time="2024-01-03 20:00:03.016598863Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:455434fe7c01851c7a44e8f2856bcd27ba50fbc171872b32ca39d3dd6e44aee1,PodSandboxId:7d9ea3e3e063f3e40b9aeb0c3a14b2152b2fc3a7460d10ea9a2b868ff48de02f,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704311985082791945,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-fkkp5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9226d155-0c50-444f-9899-7c425b5ea32e,},Annotations:map[string]string{io.kubernetes.container.hash: 877b8335,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4e78cd600ca588f554ee07b43d3401d215e526075fb0fd6d8c89562ade74c7d,PodSandboxId:4cd50b769d4b7221885b638550de6b09b111bfdaf606ea17711526dfcfb13c91,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704311985109015257,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lwbnd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 2dbfaa3d-dc71-48ee-9746-357990a3b6b5,},Annotations:map[string]string{io.kubernetes.container.hash: cb6296c2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a33080889b8bb498826a4410f447222ced0cec3d0fd7fbba815e1281c6f0425b,PodSandboxId:b3fd2466af4f494744fe3b03eb40a564b890c08a7c6466e28c4e23b75698752d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704311979480811724,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-705639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85e9694
dda2f6e2035142215b093a9d6,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b54a6b8473f946ad2d47d0c5bccc78099bb3853bce060f3cab8e0cc8ed8a2f9d,PodSandboxId:586c5324ad955f559aa33b7add0038e78fdb8e93a045afc881e9de85cab0d7bd,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704311979448310234,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-705639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bf83f638fa3405cd004c68f3ff4d378,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 6c8cf32a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77bb535231c2293addc26368e62c04a3fba1b78e6f33736b626290437a5d1aff,PodSandboxId:1da6fb4cd1f940f7cec2bd9c50060cb33f80039d55089e4055b1a92bcf9485f6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704311979505632130,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-705639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd9c59c6d56232bfd7011bd5817fde97,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6645f9e298bdada5f9bea0c6ce1e25771590562d4af0bd309743e755e6c70c09,PodSandboxId:0ff251fadd26f8b93533ea1c171fc68d04afcb59e38800748c7e032b339c6346,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704311979423584507,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-705639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0173835980ecc42799c1960492dad7b,},Annotations:map[string]s
tring{io.kubernetes.container.hash: a1f241a6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:903578371b774d38b9816330b3a9c348bd260670a0b2eaf6e4a0d9f7257a25d3,PodSandboxId:4cd50b769d4b7221885b638550de6b09b111bfdaf606ea17711526dfcfb13c91,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_EXITED,CreatedAt:1704311971519238479,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lwbnd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dbfaa3d-dc71-48ee-9746-357990a3b6b5,},Annotations:map[string]string{io.kubernetes.container.hash:
cb6296c2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:101176aa35ad26cd0a4f111845d7a4e730e0cad65ea629b64409e2db2d12d0be,PodSandboxId:7d9ea3e3e063f3e40b9aeb0c3a14b2152b2fc3a7460d10ea9a2b868ff48de02f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1704311962081684534,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-fkkp5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9226d155-0c50-444f-9899-7c425b5ea32e,},Annotations:map[string]string{io.kubernetes.container.hash: 877b8335,io.kubernetes.contai
ner.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57bfa544d32df074fee6e6b9271e7ed98ca6c1cc6bbac788ab3f358fc790f198,PodSandboxId:586c5324ad955f559aa33b7add0038e78fdb8e93a045afc881e9de85cab0d7bd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1704311961469083325,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-705639,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 1bf83f638fa3405cd004c68f3ff4d378,},Annotations:map[string]string{io.kubernetes.container.hash: 6c8cf32a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c2237d818c602824bb96ebeb79ae1a17cf8eec0bbf8bb9c1df1d6c42898c8d0,PodSandboxId:b3fd2466af4f494744fe3b03eb40a564b890c08a7c6466e28c4e23b75698752d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_EXITED,CreatedAt:1704311961052853887,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-705639,io.kubernetes.pod.namespace: kube-sy
stem,io.kubernetes.pod.uid: 85e9694dda2f6e2035142215b093a9d6,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1c7f60e2d0e1a72a5badd3317f1eb17c532ba8d564f595f9dbceec121b24424,PodSandboxId:1da6fb4cd1f940f7cec2bd9c50060cb33f80039d55089e4055b1a92bcf9485f6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_EXITED,CreatedAt:1704311961121112356,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-705639,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: cd9c59c6d56232bfd7011bd5817fde97,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57dcb0ae1175465ae550344541901cc54c4f621a16c787ffac5585ed8c4d8096,PodSandboxId:a0ddc31981afb1b9561f831d23f7dd5263c250d6c68325ab76bd3f2a89e484e8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1704311957809683313,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-705639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0173835
980ecc42799c1960492dad7b,},Annotations:map[string]string{io.kubernetes.container.hash: a1f241a6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=980948c1-c7aa-4bc0-8a4d-83b83c0c801a name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:00:03 pause-705639 crio[2401]: time="2024-01-03 20:00:03.024457796Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=b7a2da6d-e1c3-4dca-b250-dda388c4cca7 name=/runtime.v1.RuntimeService/Version
	Jan 03 20:00:03 pause-705639 crio[2401]: time="2024-01-03 20:00:03.024594117Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=b7a2da6d-e1c3-4dca-b250-dda388c4cca7 name=/runtime.v1.RuntimeService/Version
	Jan 03 20:00:03 pause-705639 crio[2401]: time="2024-01-03 20:00:03.026047821Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=013e9340-1255-4845-b68b-183662685a70 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 20:00:03 pause-705639 crio[2401]: time="2024-01-03 20:00:03.026448438Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704312003026432669,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=013e9340-1255-4845-b68b-183662685a70 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 20:00:03 pause-705639 crio[2401]: time="2024-01-03 20:00:03.027328696Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=98595fd2-f886-4d9e-b6a8-e4a64ef4bc6b name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:00:03 pause-705639 crio[2401]: time="2024-01-03 20:00:03.027397006Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=98595fd2-f886-4d9e-b6a8-e4a64ef4bc6b name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:00:03 pause-705639 crio[2401]: time="2024-01-03 20:00:03.027713497Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:455434fe7c01851c7a44e8f2856bcd27ba50fbc171872b32ca39d3dd6e44aee1,PodSandboxId:7d9ea3e3e063f3e40b9aeb0c3a14b2152b2fc3a7460d10ea9a2b868ff48de02f,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704311985082791945,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-fkkp5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9226d155-0c50-444f-9899-7c425b5ea32e,},Annotations:map[string]string{io.kubernetes.container.hash: 877b8335,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4e78cd600ca588f554ee07b43d3401d215e526075fb0fd6d8c89562ade74c7d,PodSandboxId:4cd50b769d4b7221885b638550de6b09b111bfdaf606ea17711526dfcfb13c91,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704311985109015257,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lwbnd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 2dbfaa3d-dc71-48ee-9746-357990a3b6b5,},Annotations:map[string]string{io.kubernetes.container.hash: cb6296c2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a33080889b8bb498826a4410f447222ced0cec3d0fd7fbba815e1281c6f0425b,PodSandboxId:b3fd2466af4f494744fe3b03eb40a564b890c08a7c6466e28c4e23b75698752d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704311979480811724,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-705639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85e9694
dda2f6e2035142215b093a9d6,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b54a6b8473f946ad2d47d0c5bccc78099bb3853bce060f3cab8e0cc8ed8a2f9d,PodSandboxId:586c5324ad955f559aa33b7add0038e78fdb8e93a045afc881e9de85cab0d7bd,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704311979448310234,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-705639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bf83f638fa3405cd004c68f3ff4d378,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 6c8cf32a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77bb535231c2293addc26368e62c04a3fba1b78e6f33736b626290437a5d1aff,PodSandboxId:1da6fb4cd1f940f7cec2bd9c50060cb33f80039d55089e4055b1a92bcf9485f6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704311979505632130,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-705639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd9c59c6d56232bfd7011bd5817fde97,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6645f9e298bdada5f9bea0c6ce1e25771590562d4af0bd309743e755e6c70c09,PodSandboxId:0ff251fadd26f8b93533ea1c171fc68d04afcb59e38800748c7e032b339c6346,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704311979423584507,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-705639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0173835980ecc42799c1960492dad7b,},Annotations:map[string]s
tring{io.kubernetes.container.hash: a1f241a6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:903578371b774d38b9816330b3a9c348bd260670a0b2eaf6e4a0d9f7257a25d3,PodSandboxId:4cd50b769d4b7221885b638550de6b09b111bfdaf606ea17711526dfcfb13c91,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_EXITED,CreatedAt:1704311971519238479,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lwbnd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dbfaa3d-dc71-48ee-9746-357990a3b6b5,},Annotations:map[string]string{io.kubernetes.container.hash:
cb6296c2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:101176aa35ad26cd0a4f111845d7a4e730e0cad65ea629b64409e2db2d12d0be,PodSandboxId:7d9ea3e3e063f3e40b9aeb0c3a14b2152b2fc3a7460d10ea9a2b868ff48de02f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1704311962081684534,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-fkkp5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9226d155-0c50-444f-9899-7c425b5ea32e,},Annotations:map[string]string{io.kubernetes.container.hash: 877b8335,io.kubernetes.contai
ner.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57bfa544d32df074fee6e6b9271e7ed98ca6c1cc6bbac788ab3f358fc790f198,PodSandboxId:586c5324ad955f559aa33b7add0038e78fdb8e93a045afc881e9de85cab0d7bd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1704311961469083325,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-705639,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 1bf83f638fa3405cd004c68f3ff4d378,},Annotations:map[string]string{io.kubernetes.container.hash: 6c8cf32a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c2237d818c602824bb96ebeb79ae1a17cf8eec0bbf8bb9c1df1d6c42898c8d0,PodSandboxId:b3fd2466af4f494744fe3b03eb40a564b890c08a7c6466e28c4e23b75698752d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_EXITED,CreatedAt:1704311961052853887,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-705639,io.kubernetes.pod.namespace: kube-sy
stem,io.kubernetes.pod.uid: 85e9694dda2f6e2035142215b093a9d6,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1c7f60e2d0e1a72a5badd3317f1eb17c532ba8d564f595f9dbceec121b24424,PodSandboxId:1da6fb4cd1f940f7cec2bd9c50060cb33f80039d55089e4055b1a92bcf9485f6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_EXITED,CreatedAt:1704311961121112356,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-705639,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: cd9c59c6d56232bfd7011bd5817fde97,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57dcb0ae1175465ae550344541901cc54c4f621a16c787ffac5585ed8c4d8096,PodSandboxId:a0ddc31981afb1b9561f831d23f7dd5263c250d6c68325ab76bd3f2a89e484e8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1704311957809683313,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-705639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0173835
980ecc42799c1960492dad7b,},Annotations:map[string]string{io.kubernetes.container.hash: a1f241a6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=98595fd2-f886-4d9e-b6a8-e4a64ef4bc6b name=/runtime.v1.RuntimeService/ListContainers
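The level=debug entries above are CRI-O serving standard CRI RuntimeService/ImageService RPCs (Version, ImageFsInfo, ListPodSandbox, ListContainers) to its client. A hedged sketch of hitting the same endpoints directly on the node with crictl — illustrative commands, not ones captured in this run:

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version      # RuntimeService/Version
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo  # ImageService/ImageFsInfo
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock pods         # RuntimeService/ListPodSandbox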
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c4e78cd600ca5       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   18 seconds ago      Running             kube-proxy                2                   4cd50b769d4b7       kube-proxy-lwbnd
	455434fe7c018       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   18 seconds ago      Running             coredns                   2                   7d9ea3e3e063f       coredns-5dd5756b68-fkkp5
	77bb535231c22       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   23 seconds ago      Running             kube-controller-manager   2                   1da6fb4cd1f94       kube-controller-manager-pause-705639
	a33080889b8bb       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   23 seconds ago      Running             kube-scheduler            2                   b3fd2466af4f4       kube-scheduler-pause-705639
	b54a6b8473f94       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   23 seconds ago      Running             etcd                      2                   586c5324ad955       etcd-pause-705639
	6645f9e298bda       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   23 seconds ago      Running             kube-apiserver            2                   0ff251fadd26f       kube-apiserver-pause-705639
	903578371b774       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   31 seconds ago      Exited              kube-proxy                1                   4cd50b769d4b7       kube-proxy-lwbnd
	101176aa35ad2       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   41 seconds ago      Exited              coredns                   1                   7d9ea3e3e063f       coredns-5dd5756b68-fkkp5
	57bfa544d32df       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   41 seconds ago      Exited              etcd                      1                   586c5324ad955       etcd-pause-705639
	a1c7f60e2d0e1       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   41 seconds ago      Exited              kube-controller-manager   1                   1da6fb4cd1f94       kube-controller-manager-pause-705639
	6c2237d818c60       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   42 seconds ago      Exited              kube-scheduler            1                   b3fd2466af4f4       kube-scheduler-pause-705639
	57dcb0ae11754       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   45 seconds ago      Exited              kube-apiserver            1                   a0ddc31981afb       kube-apiserver-pause-705639
	
	
	==> coredns [101176aa35ad26cd0a4f111845d7a4e730e0cad65ea629b64409e2db2d12d0be] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 347fb4f25cc546215231b2e9ef34a7838489408c50ad1d77e38b06de967dd388dc540a0db2692259640c7998323f3763426b7a7e73fad2aa89cebddf27cf7c94
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:48941 - 42552 "HINFO IN 6871104697284162923.1326580057340223885. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010157982s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> coredns [455434fe7c01851c7a44e8f2856bcd27ba50fbc171872b32ca39d3dd6e44aee1] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 347fb4f25cc546215231b2e9ef34a7838489408c50ad1d77e38b06de967dd388dc540a0db2692259640c7998323f3763426b7a7e73fad2aa89cebddf27cf7c94
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:46897 - 36895 "HINFO IN 2621362529278284310.5964583020983758366. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011387417s
	
	
	==> describe nodes <==
	Name:               pause-705639
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-705639
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b6a81cbc05f28310ff11df4170e79e2b8bf477a
	                    minikube.k8s.io/name=pause-705639
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_03T19_58_08_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 03 Jan 2024 19:58:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-705639
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 03 Jan 2024 19:59:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 03 Jan 2024 19:59:44 +0000   Wed, 03 Jan 2024 19:58:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 03 Jan 2024 19:59:44 +0000   Wed, 03 Jan 2024 19:58:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 03 Jan 2024 19:59:44 +0000   Wed, 03 Jan 2024 19:58:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 03 Jan 2024 19:59:44 +0000   Wed, 03 Jan 2024 19:58:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.83.234
	  Hostname:    pause-705639
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	System Info:
	  Machine ID:                 cea6d4794b7e42e7a563c3182cb164ac
	  System UUID:                cea6d479-4b7e-42e7-a563-c3182cb164ac
	  Boot ID:                    5cd4f7f6-adce-4f0c-a3c4-478567b0ed0a
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-fkkp5                100m (5%!)(MISSING)     0 (0%!)(MISSING)      70Mi (3%!)(MISSING)        170Mi (8%!)(MISSING)     102s
	  kube-system                 etcd-pause-705639                       100m (5%!)(MISSING)     0 (0%!)(MISSING)      100Mi (5%!)(MISSING)       0 (0%!)(MISSING)         115s
	  kube-system                 kube-apiserver-pause-705639             250m (12%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         115s
	  kube-system                 kube-controller-manager-pause-705639    200m (10%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         115s
	  kube-system                 kube-proxy-lwbnd                        0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         102s
	  kube-system                 kube-scheduler-pause-705639             100m (5%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         116s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%!)(MISSING)  0 (0%!)(MISSING)
	  memory             170Mi (8%!)(MISSING)  170Mi (8%!)(MISSING)
	  ephemeral-storage  0 (0%!)(MISSING)      0 (0%!)(MISSING)
	  hugepages-2Mi      0 (0%!)(MISSING)      0 (0%!)(MISSING)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 100s                 kube-proxy       
	  Normal  Starting                 17s                  kube-proxy       
	  Normal  NodeHasSufficientPID     2m5s (x7 over 2m5s)  kubelet          Node pause-705639 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m5s (x8 over 2m5s)  kubelet          Node pause-705639 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m5s (x8 over 2m5s)  kubelet          Node pause-705639 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  2m5s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                115s                 kubelet          Node pause-705639 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  115s                 kubelet          Node pause-705639 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    115s                 kubelet          Node pause-705639 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     115s                 kubelet          Node pause-705639 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  115s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 115s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           103s                 node-controller  Node pause-705639 event: Registered Node pause-705639 in Controller
	  Normal  Starting                 25s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  24s (x8 over 25s)    kubelet          Node pause-705639 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24s (x8 over 25s)    kubelet          Node pause-705639 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24s (x7 over 25s)    kubelet          Node pause-705639 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7s                   node-controller  Node pause-705639 event: Registered Node pause-705639 in Controller
	
	
	==> dmesg <==
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.062216] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.542663] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.100726] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.172876] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +6.415192] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.246403] systemd-fstab-generator[636]: Ignoring "noauto" for root device
	[  +0.139758] systemd-fstab-generator[647]: Ignoring "noauto" for root device
	[  +0.169836] systemd-fstab-generator[660]: Ignoring "noauto" for root device
	[  +0.136663] systemd-fstab-generator[671]: Ignoring "noauto" for root device
	[  +0.254092] systemd-fstab-generator[695]: Ignoring "noauto" for root device
	[ +10.849378] systemd-fstab-generator[921]: Ignoring "noauto" for root device
	[Jan 3 19:58] systemd-fstab-generator[1254]: Ignoring "noauto" for root device
	[Jan 3 19:59] systemd-fstab-generator[2066]: Ignoring "noauto" for root device
	[  +0.226735] systemd-fstab-generator[2083]: Ignoring "noauto" for root device
	[  +0.031316] kauditd_printk_skb: 23 callbacks suppressed
	[  +0.622802] systemd-fstab-generator[2238]: Ignoring "noauto" for root device
	[  +0.251550] systemd-fstab-generator[2251]: Ignoring "noauto" for root device
	[  +0.557797] systemd-fstab-generator[2307]: Ignoring "noauto" for root device
	[ +20.391257] systemd-fstab-generator[3242]: Ignoring "noauto" for root device
	[  +7.244662] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [57bfa544d32df074fee6e6b9271e7ed98ca6c1cc6bbac788ab3f358fc790f198] <==
	{"level":"info","ts":"2024-01-03T19:59:23.480301Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.83.234:2380"}
	{"level":"info","ts":"2024-01-03T19:59:24.693698Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cd43c6a93c7b8f91 is starting a new election at term 2"}
	{"level":"info","ts":"2024-01-03T19:59:24.693775Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cd43c6a93c7b8f91 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-01-03T19:59:24.693817Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cd43c6a93c7b8f91 received MsgPreVoteResp from cd43c6a93c7b8f91 at term 2"}
	{"level":"info","ts":"2024-01-03T19:59:24.693835Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cd43c6a93c7b8f91 became candidate at term 3"}
	{"level":"info","ts":"2024-01-03T19:59:24.693847Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cd43c6a93c7b8f91 received MsgVoteResp from cd43c6a93c7b8f91 at term 3"}
	{"level":"info","ts":"2024-01-03T19:59:24.693859Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cd43c6a93c7b8f91 became leader at term 3"}
	{"level":"info","ts":"2024-01-03T19:59:24.693869Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: cd43c6a93c7b8f91 elected leader cd43c6a93c7b8f91 at term 3"}
	{"level":"info","ts":"2024-01-03T19:59:24.700495Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-03T19:59:24.702146Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-03T19:59:24.702572Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-03T19:59:24.703847Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.83.234:2379"}
	{"level":"info","ts":"2024-01-03T19:59:24.70044Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"cd43c6a93c7b8f91","local-member-attributes":"{Name:pause-705639 ClientURLs:[https://192.168.83.234:2379]}","request-path":"/0/members/cd43c6a93c7b8f91/attributes","cluster-id":"29ca0c39bca1c057","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-03T19:59:24.707562Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-03T19:59:24.707632Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-03T19:59:36.924662Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-01-03T19:59:36.924818Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"pause-705639","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.83.234:2380"],"advertise-client-urls":["https://192.168.83.234:2379"]}
	{"level":"warn","ts":"2024-01-03T19:59:36.924987Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-01-03T19:59:36.92509Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-01-03T19:59:36.927154Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.83.234:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-01-03T19:59:36.927199Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.83.234:2379: use of closed network connection"}
	{"level":"info","ts":"2024-01-03T19:59:36.928611Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"cd43c6a93c7b8f91","current-leader-member-id":"cd43c6a93c7b8f91"}
	{"level":"info","ts":"2024-01-03T19:59:36.9328Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.83.234:2380"}
	{"level":"info","ts":"2024-01-03T19:59:36.932979Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.83.234:2380"}
	{"level":"info","ts":"2024-01-03T19:59:36.933036Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"pause-705639","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.83.234:2380"],"advertise-client-urls":["https://192.168.83.234:2379"]}
	
	
	==> etcd [b54a6b8473f946ad2d47d0c5bccc78099bb3853bce060f3cab8e0cc8ed8a2f9d] <==
	{"level":"info","ts":"2024-01-03T19:59:41.103227Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-01-03T19:59:41.103347Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-03T19:59:41.106951Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-03T19:59:41.106964Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-03T19:59:41.105697Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.83.234:2380"}
	{"level":"info","ts":"2024-01-03T19:59:41.108023Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.83.234:2380"}
	{"level":"info","ts":"2024-01-03T19:59:41.106187Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cd43c6a93c7b8f91 switched to configuration voters=(14790884031381344145)"}
	{"level":"info","ts":"2024-01-03T19:59:41.108158Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"29ca0c39bca1c057","local-member-id":"cd43c6a93c7b8f91","added-peer-id":"cd43c6a93c7b8f91","added-peer-peer-urls":["https://192.168.83.234:2380"]}
	{"level":"info","ts":"2024-01-03T19:59:41.108388Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"29ca0c39bca1c057","local-member-id":"cd43c6a93c7b8f91","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-03T19:59:41.108476Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-03T19:59:42.041269Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cd43c6a93c7b8f91 is starting a new election at term 3"}
	{"level":"info","ts":"2024-01-03T19:59:42.041332Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cd43c6a93c7b8f91 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-01-03T19:59:42.04136Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cd43c6a93c7b8f91 received MsgPreVoteResp from cd43c6a93c7b8f91 at term 3"}
	{"level":"info","ts":"2024-01-03T19:59:42.041388Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cd43c6a93c7b8f91 became candidate at term 4"}
	{"level":"info","ts":"2024-01-03T19:59:42.041394Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cd43c6a93c7b8f91 received MsgVoteResp from cd43c6a93c7b8f91 at term 4"}
	{"level":"info","ts":"2024-01-03T19:59:42.041403Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cd43c6a93c7b8f91 became leader at term 4"}
	{"level":"info","ts":"2024-01-03T19:59:42.041409Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: cd43c6a93c7b8f91 elected leader cd43c6a93c7b8f91 at term 4"}
	{"level":"info","ts":"2024-01-03T19:59:42.04304Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"cd43c6a93c7b8f91","local-member-attributes":"{Name:pause-705639 ClientURLs:[https://192.168.83.234:2379]}","request-path":"/0/members/cd43c6a93c7b8f91/attributes","cluster-id":"29ca0c39bca1c057","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-03T19:59:42.043226Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-03T19:59:42.044344Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.83.234:2379"}
	{"level":"info","ts":"2024-01-03T19:59:42.044918Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-03T19:59:42.045612Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-03T19:59:42.04565Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-03T19:59:42.045838Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-03T19:59:44.138329Z","caller":"traceutil/trace.go:171","msg":"trace[844554984] transaction","detail":"{read_only:false; response_revision:420; number_of_response:1; }","duration":"109.850362ms","start":"2024-01-03T19:59:44.028459Z","end":"2024-01-03T19:59:44.13831Z","steps":["trace[844554984] 'process raft request'  (duration: 82.134231ms)","trace[844554984] 'compare'  (duration: 27.425284ms)"],"step_count":2}
	
	
	==> kernel <==
	 20:00:03 up 2 min,  0 users,  load average: 1.73, 0.69, 0.25
	Linux pause-705639 5.10.57 #1 SMP Sat Dec 16 11:03:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [57dcb0ae1175465ae550344541901cc54c4f621a16c787ffac5585ed8c4d8096] <==
	
	
	==> kube-apiserver [6645f9e298bdada5f9bea0c6ce1e25771590562d4af0bd309743e755e6c70c09] <==
	I0103 19:59:43.829621       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0103 19:59:43.877876       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0103 19:59:43.877940       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0103 19:59:43.976086       1 shared_informer.go:318] Caches are synced for configmaps
	I0103 19:59:43.976150       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0103 19:59:43.976347       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0103 19:59:43.979251       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0103 19:59:43.984856       1 aggregator.go:166] initial CRD sync complete...
	I0103 19:59:43.984934       1 autoregister_controller.go:141] Starting autoregister controller
	I0103 19:59:43.984966       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0103 19:59:43.985011       1 cache.go:39] Caches are synced for autoregister controller
	I0103 19:59:43.990724       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0103 19:59:44.004450       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0103 19:59:44.035270       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0103 19:59:44.035329       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	E0103 19:59:44.036296       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0103 19:59:44.041350       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0103 19:59:44.873934       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0103 19:59:46.074152       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0103 19:59:46.092891       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0103 19:59:46.170831       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0103 19:59:46.209058       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0103 19:59:46.228931       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0103 19:59:56.870234       1 controller.go:624] quota admission added evaluator for: endpoints
	I0103 19:59:56.876735       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [77bb535231c2293addc26368e62c04a3fba1b78e6f33736b626290437a5d1aff] <==
	I0103 19:59:56.696387       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I0103 19:59:56.696606       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-705639"
	I0103 19:59:56.696711       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0103 19:59:56.696779       1 taint_manager.go:205] "Starting NoExecuteTaintManager"
	I0103 19:59:56.697389       1 taint_manager.go:210] "Sending events to api server"
	I0103 19:59:56.697915       1 event.go:307] "Event occurred" object="pause-705639" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-705639 event: Registered Node pause-705639 in Controller"
	I0103 19:59:56.709888       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0103 19:59:56.710014       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0103 19:59:56.712622       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0103 19:59:56.713782       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0103 19:59:56.717909       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0103 19:59:56.726387       1 shared_informer.go:318] Caches are synced for HPA
	I0103 19:59:56.727867       1 shared_informer.go:318] Caches are synced for daemon sets
	I0103 19:59:56.730666       1 shared_informer.go:318] Caches are synced for job
	I0103 19:59:56.735931       1 shared_informer.go:318] Caches are synced for TTL
	I0103 19:59:56.741421       1 shared_informer.go:318] Caches are synced for attach detach
	I0103 19:59:56.748735       1 shared_informer.go:318] Caches are synced for PV protection
	I0103 19:59:56.808161       1 shared_informer.go:318] Caches are synced for resource quota
	I0103 19:59:56.809402       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0103 19:59:56.827843       1 shared_informer.go:318] Caches are synced for resource quota
	I0103 19:59:56.853939       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0103 19:59:56.857676       1 shared_informer.go:318] Caches are synced for endpoint
	I0103 19:59:57.268463       1 shared_informer.go:318] Caches are synced for garbage collector
	I0103 19:59:57.273859       1 shared_informer.go:318] Caches are synced for garbage collector
	I0103 19:59:57.273968       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	
	==> kube-controller-manager [a1c7f60e2d0e1a72a5badd3317f1eb17c532ba8d564f595f9dbceec121b24424] <==
	I0103 19:59:23.035149       1 serving.go:348] Generated self-signed cert in-memory
	I0103 19:59:24.057853       1 controllermanager.go:189] "Starting" version="v1.28.4"
	I0103 19:59:24.057909       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0103 19:59:24.060073       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0103 19:59:24.061094       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0103 19:59:24.061690       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0103 19:59:24.061908       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0103 19:59:34.064472       1 controllermanager.go:235] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.83.234:8443/healthz\": dial tcp 192.168.83.234:8443: connect: connection refused"
	
	
	==> kube-proxy [903578371b774d38b9816330b3a9c348bd260670a0b2eaf6e4a0d9f7257a25d3] <==
	
	
	==> kube-proxy [c4e78cd600ca588f554ee07b43d3401d215e526075fb0fd6d8c89562ade74c7d] <==
	I0103 19:59:45.465668       1 server_others.go:69] "Using iptables proxy"
	I0103 19:59:45.490440       1 node.go:141] Successfully retrieved node IP: 192.168.83.234
	I0103 19:59:45.553154       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0103 19:59:45.553257       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0103 19:59:45.561768       1 server_others.go:152] "Using iptables Proxier"
	I0103 19:59:45.561907       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0103 19:59:45.562326       1 server.go:846] "Version info" version="v1.28.4"
	I0103 19:59:45.562366       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0103 19:59:45.563895       1 config.go:188] "Starting service config controller"
	I0103 19:59:45.563975       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0103 19:59:45.564021       1 config.go:97] "Starting endpoint slice config controller"
	I0103 19:59:45.564054       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0103 19:59:45.564800       1 config.go:315] "Starting node config controller"
	I0103 19:59:45.564892       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0103 19:59:45.664601       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0103 19:59:45.664650       1 shared_informer.go:318] Caches are synced for service config
	I0103 19:59:45.665098       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [6c2237d818c602824bb96ebeb79ae1a17cf8eec0bbf8bb9c1df1d6c42898c8d0] <==
	E0103 19:59:32.270808       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.83.234:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.83.234:8443: connect: connection refused
	W0103 19:59:32.326665       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.83.234:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.83.234:8443: connect: connection refused
	E0103 19:59:32.326728       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.83.234:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.83.234:8443: connect: connection refused
	W0103 19:59:32.353759       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: Get "https://192.168.83.234:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.83.234:8443: connect: connection refused
	E0103 19:59:32.353826       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.83.234:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.83.234:8443: connect: connection refused
	W0103 19:59:32.436254       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: Get "https://192.168.83.234:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.83.234:8443: connect: connection refused
	E0103 19:59:32.436326       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.83.234:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.83.234:8443: connect: connection refused
	W0103 19:59:32.508094       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: Get "https://192.168.83.234:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.83.234:8443: connect: connection refused
	E0103 19:59:32.508166       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.83.234:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.83.234:8443: connect: connection refused
	W0103 19:59:32.649000       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://192.168.83.234:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.83.234:8443: connect: connection refused
	E0103 19:59:32.649070       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.83.234:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.83.234:8443: connect: connection refused
	W0103 19:59:32.738224       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: Get "https://192.168.83.234:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.83.234:8443: connect: connection refused
	E0103 19:59:32.738255       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.83.234:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.83.234:8443: connect: connection refused
	W0103 19:59:32.886637       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://192.168.83.234:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.83.234:8443: connect: connection refused
	E0103 19:59:32.886779       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.83.234:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.83.234:8443: connect: connection refused
	W0103 19:59:33.062326       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: Get "https://192.168.83.234:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.83.234:8443: connect: connection refused
	E0103 19:59:33.062577       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.83.234:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.83.234:8443: connect: connection refused
	W0103 19:59:33.792745       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: Get "https://192.168.83.234:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.83.234:8443: connect: connection refused
	E0103 19:59:33.792864       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.83.234:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.83.234:8443: connect: connection refused
	W0103 19:59:34.162398       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://192.168.83.234:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.83.234:8443: connect: connection refused
	E0103 19:59:34.162565       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.83.234:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.83.234:8443: connect: connection refused
	W0103 19:59:34.385411       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get "https://192.168.83.234:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.83.234:8443: connect: connection refused
	E0103 19:59:34.385651       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.83.234:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.83.234:8443: connect: connection refused
	E0103 19:59:37.081006       1 server.go:214] "waiting for handlers to sync" err="context canceled"
	E0103 19:59:37.081714       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [a33080889b8bb498826a4410f447222ced0cec3d0fd7fbba815e1281c6f0425b] <==
	I0103 19:59:42.051057       1 serving.go:348] Generated self-signed cert in-memory
	W0103 19:59:43.908414       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0103 19:59:43.908497       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0103 19:59:43.908549       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0103 19:59:43.908558       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0103 19:59:43.991997       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0103 19:59:43.992086       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0103 19:59:43.993620       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0103 19:59:43.993702       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0103 19:59:43.994591       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0103 19:59:43.996022       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0103 19:59:44.094130       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Wed 2024-01-03 19:57:30 UTC, ends at Wed 2024-01-03 20:00:03 UTC. --
	Jan 03 19:59:39 pause-705639 kubelet[3248]: E0103 19:59:39.475598    3248 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.83.234:8443: connect: connection refused" node="pause-705639"
	Jan 03 19:59:40 pause-705639 kubelet[3248]: E0103 19:59:40.159687    3248 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-705639?timeout=10s\": dial tcp 192.168.83.234:8443: connect: connection refused" interval="1.6s"
	Jan 03 19:59:40 pause-705639 kubelet[3248]: W0103 19:59:40.166340    3248 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-705639&limit=500&resourceVersion=0": dial tcp 192.168.83.234:8443: connect: connection refused
	Jan 03 19:59:40 pause-705639 kubelet[3248]: E0103 19:59:40.166431    3248 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-705639&limit=500&resourceVersion=0": dial tcp 192.168.83.234:8443: connect: connection refused
	Jan 03 19:59:40 pause-705639 kubelet[3248]: W0103 19:59:40.171977    3248 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.83.234:8443: connect: connection refused
	Jan 03 19:59:40 pause-705639 kubelet[3248]: E0103 19:59:40.172068    3248 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.83.234:8443: connect: connection refused
	Jan 03 19:59:40 pause-705639 kubelet[3248]: W0103 19:59:40.260986    3248 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.83.234:8443: connect: connection refused
	Jan 03 19:59:40 pause-705639 kubelet[3248]: E0103 19:59:40.261085    3248 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.83.234:8443: connect: connection refused
	Jan 03 19:59:40 pause-705639 kubelet[3248]: I0103 19:59:40.277279    3248 kubelet_node_status.go:70] "Attempting to register node" node="pause-705639"
	Jan 03 19:59:40 pause-705639 kubelet[3248]: E0103 19:59:40.277780    3248 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.83.234:8443: connect: connection refused" node="pause-705639"
	Jan 03 19:59:40 pause-705639 kubelet[3248]: W0103 19:59:40.305801    3248 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.83.234:8443: connect: connection refused
	Jan 03 19:59:40 pause-705639 kubelet[3248]: E0103 19:59:40.305878    3248 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.83.234:8443: connect: connection refused
	Jan 03 19:59:41 pause-705639 kubelet[3248]: I0103 19:59:41.880214    3248 kubelet_node_status.go:70] "Attempting to register node" node="pause-705639"
	Jan 03 19:59:44 pause-705639 kubelet[3248]: I0103 19:59:44.025111    3248 kubelet_node_status.go:108] "Node was previously registered" node="pause-705639"
	Jan 03 19:59:44 pause-705639 kubelet[3248]: I0103 19:59:44.025235    3248 kubelet_node_status.go:73] "Successfully registered node" node="pause-705639"
	Jan 03 19:59:44 pause-705639 kubelet[3248]: I0103 19:59:44.029284    3248 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jan 03 19:59:44 pause-705639 kubelet[3248]: I0103 19:59:44.031026    3248 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jan 03 19:59:44 pause-705639 kubelet[3248]: I0103 19:59:44.740634    3248 apiserver.go:52] "Watching apiserver"
	Jan 03 19:59:44 pause-705639 kubelet[3248]: I0103 19:59:44.747674    3248 topology_manager.go:215] "Topology Admit Handler" podUID="2dbfaa3d-dc71-48ee-9746-357990a3b6b5" podNamespace="kube-system" podName="kube-proxy-lwbnd"
	Jan 03 19:59:44 pause-705639 kubelet[3248]: I0103 19:59:44.747971    3248 topology_manager.go:215] "Topology Admit Handler" podUID="9226d155-0c50-444f-9899-7c425b5ea32e" podNamespace="kube-system" podName="coredns-5dd5756b68-fkkp5"
	Jan 03 19:59:44 pause-705639 kubelet[3248]: I0103 19:59:44.753991    3248 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Jan 03 19:59:44 pause-705639 kubelet[3248]: I0103 19:59:44.796042    3248 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2dbfaa3d-dc71-48ee-9746-357990a3b6b5-xtables-lock\") pod \"kube-proxy-lwbnd\" (UID: \"2dbfaa3d-dc71-48ee-9746-357990a3b6b5\") " pod="kube-system/kube-proxy-lwbnd"
	Jan 03 19:59:44 pause-705639 kubelet[3248]: I0103 19:59:44.796118    3248 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2dbfaa3d-dc71-48ee-9746-357990a3b6b5-lib-modules\") pod \"kube-proxy-lwbnd\" (UID: \"2dbfaa3d-dc71-48ee-9746-357990a3b6b5\") " pod="kube-system/kube-proxy-lwbnd"
	Jan 03 19:59:45 pause-705639 kubelet[3248]: I0103 19:59:45.049428    3248 scope.go:117] "RemoveContainer" containerID="903578371b774d38b9816330b3a9c348bd260670a0b2eaf6e4a0d9f7257a25d3"
	Jan 03 19:59:45 pause-705639 kubelet[3248]: I0103 19:59:45.050196    3248 scope.go:117] "RemoveContainer" containerID="101176aa35ad26cd0a4f111845d7a4e730e0cad65ea629b64409e2db2d12d0be"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-705639 -n pause-705639
helpers_test.go:261: (dbg) Run:  kubectl --context pause-705639 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-705639 -n pause-705639
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-705639 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-705639 logs -n 25: (1.398543745s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile     |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	| ssh     | -p auto-719541 sudo systemctl                        | auto-719541    | jenkins | v1.32.0 | 03 Jan 24 19:59 UTC | 03 Jan 24 19:59 UTC |
	|         | status kubelet --all --full                          |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p auto-719541 sudo systemctl                        | auto-719541    | jenkins | v1.32.0 | 03 Jan 24 19:59 UTC | 03 Jan 24 19:59 UTC |
	|         | cat kubelet --no-pager                               |                |         |         |                     |                     |
	| ssh     | -p auto-719541 sudo journalctl                       | auto-719541    | jenkins | v1.32.0 | 03 Jan 24 19:59 UTC | 03 Jan 24 19:59 UTC |
	|         | -xeu kubelet --all --full                            |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p auto-719541 sudo cat                              | auto-719541    | jenkins | v1.32.0 | 03 Jan 24 19:59 UTC | 03 Jan 24 19:59 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                |         |         |                     |                     |
	| ssh     | -p auto-719541 sudo cat                              | auto-719541    | jenkins | v1.32.0 | 03 Jan 24 19:59 UTC | 03 Jan 24 19:59 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                |         |         |                     |                     |
	| ssh     | -p auto-719541 sudo systemctl                        | auto-719541    | jenkins | v1.32.0 | 03 Jan 24 19:59 UTC |                     |
	|         | status docker --all --full                           |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p auto-719541 sudo systemctl                        | auto-719541    | jenkins | v1.32.0 | 03 Jan 24 19:59 UTC | 03 Jan 24 19:59 UTC |
	|         | cat docker --no-pager                                |                |         |         |                     |                     |
	| ssh     | -p auto-719541 sudo cat                              | auto-719541    | jenkins | v1.32.0 | 03 Jan 24 19:59 UTC | 03 Jan 24 19:59 UTC |
	|         | /etc/docker/daemon.json                              |                |         |         |                     |                     |
	| ssh     | -p auto-719541 sudo docker                           | auto-719541    | jenkins | v1.32.0 | 03 Jan 24 19:59 UTC |                     |
	|         | system info                                          |                |         |         |                     |                     |
	| ssh     | -p auto-719541 sudo systemctl                        | auto-719541    | jenkins | v1.32.0 | 03 Jan 24 19:59 UTC |                     |
	|         | status cri-docker --all --full                       |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p auto-719541 sudo systemctl                        | auto-719541    | jenkins | v1.32.0 | 03 Jan 24 19:59 UTC | 03 Jan 24 19:59 UTC |
	|         | cat cri-docker --no-pager                            |                |         |         |                     |                     |
	| ssh     | -p auto-719541 sudo cat                              | auto-719541    | jenkins | v1.32.0 | 03 Jan 24 19:59 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                |         |         |                     |                     |
	| ssh     | -p auto-719541 sudo cat                              | auto-719541    | jenkins | v1.32.0 | 03 Jan 24 19:59 UTC | 03 Jan 24 19:59 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                |         |         |                     |                     |
	| ssh     | -p auto-719541 sudo                                  | auto-719541    | jenkins | v1.32.0 | 03 Jan 24 19:59 UTC | 03 Jan 24 19:59 UTC |
	|         | cri-dockerd --version                                |                |         |         |                     |                     |
	| ssh     | -p auto-719541 sudo systemctl                        | auto-719541    | jenkins | v1.32.0 | 03 Jan 24 19:59 UTC |                     |
	|         | status containerd --all --full                       |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p auto-719541 sudo systemctl                        | auto-719541    | jenkins | v1.32.0 | 03 Jan 24 19:59 UTC | 03 Jan 24 19:59 UTC |
	|         | cat containerd --no-pager                            |                |         |         |                     |                     |
	| ssh     | -p auto-719541 sudo cat                              | auto-719541    | jenkins | v1.32.0 | 03 Jan 24 19:59 UTC | 03 Jan 24 19:59 UTC |
	|         | /lib/systemd/system/containerd.service               |                |         |         |                     |                     |
	| ssh     | -p auto-719541 sudo cat                              | auto-719541    | jenkins | v1.32.0 | 03 Jan 24 19:59 UTC | 03 Jan 24 19:59 UTC |
	|         | /etc/containerd/config.toml                          |                |         |         |                     |                     |
	| ssh     | -p auto-719541 sudo containerd                       | auto-719541    | jenkins | v1.32.0 | 03 Jan 24 19:59 UTC | 03 Jan 24 19:59 UTC |
	|         | config dump                                          |                |         |         |                     |                     |
	| ssh     | -p auto-719541 sudo systemctl                        | auto-719541    | jenkins | v1.32.0 | 03 Jan 24 19:59 UTC | 03 Jan 24 19:59 UTC |
	|         | status crio --all --full                             |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p auto-719541 sudo systemctl                        | auto-719541    | jenkins | v1.32.0 | 03 Jan 24 19:59 UTC | 03 Jan 24 19:59 UTC |
	|         | cat crio --no-pager                                  |                |         |         |                     |                     |
	| ssh     | -p auto-719541 sudo find                             | auto-719541    | jenkins | v1.32.0 | 03 Jan 24 19:59 UTC | 03 Jan 24 19:59 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                |         |         |                     |                     |
	| delete  | -p auto-719541                                       | auto-719541    | jenkins | v1.32.0 | 03 Jan 24 19:59 UTC | 03 Jan 24 19:59 UTC |
	| start   | -p calico-719541 --memory=3072                       | calico-719541  | jenkins | v1.32.0 | 03 Jan 24 19:59 UTC |                     |
	|         | --alsologtostderr --wait=true                        |                |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                |         |         |                     |                     |
	|         | --cni=calico --driver=kvm2                           |                |         |         |                     |                     |
	|         | --container-runtime=crio                             |                |         |         |                     |                     |
	| ssh     | -p kindnet-719541 pgrep -a                           | kindnet-719541 | jenkins | v1.32.0 | 03 Jan 24 19:59 UTC | 03 Jan 24 19:59 UTC |
	|         | kubelet                                              |                |         |         |                     |                     |
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
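	
	For reference, the crio/containerd inspection rows in the table above map onto minikube CLI invocations of roughly the following form. This is only a sketch reconstructed from those rows: the profile name and remote commands come from the table itself, and the binary path is the MINIKUBE_BIN value shown in the start log below.
	
	  out/minikube-linux-amd64 -p auto-719541 ssh "sudo systemctl status crio --all --full --no-pager"
	  out/minikube-linux-amd64 -p auto-719541 ssh "sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;"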
	
	
	==> Last Start <==
	Log file created at: 2024/01/03 19:59:50
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0103 19:59:50.094780   48849 out.go:296] Setting OutFile to fd 1 ...
	I0103 19:59:50.094894   48849 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 19:59:50.094899   48849 out.go:309] Setting ErrFile to fd 2...
	I0103 19:59:50.094903   48849 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 19:59:50.095140   48849 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17885-9609/.minikube/bin
	I0103 19:59:50.095703   48849 out.go:303] Setting JSON to false
	I0103 19:59:50.096733   48849 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6137,"bootTime":1704305853,"procs":334,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0103 19:59:50.096801   48849 start.go:138] virtualization: kvm guest
	I0103 19:59:50.099239   48849 out.go:177] * [calico-719541] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0103 19:59:50.100545   48849 out.go:177]   - MINIKUBE_LOCATION=17885
	I0103 19:59:50.100552   48849 notify.go:220] Checking for updates...
	I0103 19:59:50.103406   48849 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0103 19:59:50.104757   48849 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17885-9609/kubeconfig
	I0103 19:59:50.106025   48849 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17885-9609/.minikube
	I0103 19:59:50.107290   48849 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0103 19:59:50.108536   48849 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0103 19:59:50.110211   48849 config.go:182] Loaded profile config "kindnet-719541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 19:59:50.110361   48849 config.go:182] Loaded profile config "pause-705639": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 19:59:50.110430   48849 config.go:182] Loaded profile config "stopped-upgrade-857735": Driver=, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I0103 19:59:50.110533   48849 driver.go:392] Setting default libvirt URI to qemu:///system
	I0103 19:59:50.148426   48849 out.go:177] * Using the kvm2 driver based on user configuration
	I0103 19:59:50.149951   48849 start.go:298] selected driver: kvm2
	I0103 19:59:50.149964   48849 start.go:902] validating driver "kvm2" against <nil>
	I0103 19:59:50.149980   48849 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0103 19:59:50.150755   48849 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 19:59:50.150861   48849 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17885-9609/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0103 19:59:50.165635   48849 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0103 19:59:50.165691   48849 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0103 19:59:50.165886   48849 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0103 19:59:50.165943   48849 cni.go:84] Creating CNI manager for "calico"
	I0103 19:59:50.165956   48849 start_flags.go:318] Found "Calico" CNI - setting NetworkPlugin=cni
	I0103 19:59:50.165965   48849 start_flags.go:323] config:
	{Name:calico-719541 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:calico-719541 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISo
cket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 19:59:50.166095   48849 iso.go:125] acquiring lock: {Name:mk59d09085a9554144b68de9b7bfe0e0fce53cc5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 19:59:50.168050   48849 out.go:177] * Starting control plane node calico-719541 in cluster calico-719541
	I0103 19:59:50.169620   48849 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0103 19:59:50.169656   48849 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0103 19:59:50.169665   48849 cache.go:56] Caching tarball of preloaded images
	I0103 19:59:50.169777   48849 preload.go:174] Found /home/jenkins/minikube-integration/17885-9609/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0103 19:59:50.169792   48849 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0103 19:59:50.169918   48849 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/calico-719541/config.json ...
	I0103 19:59:50.169942   48849 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/calico-719541/config.json: {Name:mkbb1a2e8d8fc93b31f76881d0e7f9131f3b648a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 19:59:50.170069   48849 start.go:365] acquiring machines lock for calico-719541: {Name:mk43df5d7e9fef8aa5f3e5c539ca15bff35ae8cf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0103 19:59:50.170096   48849 start.go:369] acquired machines lock for "calico-719541" in 14.82µs
	I0103 19:59:50.170119   48849 start.go:93] Provisioning new machine with config: &{Name:calico-719541 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:calico-719541 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144
MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0103 19:59:50.170190   48849 start.go:125] createHost starting for "" (driver="kvm2")
	I0103 19:59:48.274096   46928 pod_ready.go:102] pod "etcd-pause-705639" in "kube-system" namespace has status "Ready":"False"
	I0103 19:59:50.772593   46928 pod_ready.go:102] pod "etcd-pause-705639" in "kube-system" namespace has status "Ready":"False"
	I0103 19:59:51.275711   46928 pod_ready.go:92] pod "etcd-pause-705639" in "kube-system" namespace has status "Ready":"True"
	I0103 19:59:51.275745   46928 pod_ready.go:81] duration metric: took 5.010611315s waiting for pod "etcd-pause-705639" in "kube-system" namespace to be "Ready" ...
	I0103 19:59:51.275757   46928 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-705639" in "kube-system" namespace to be "Ready" ...
	I0103 19:59:50.171940   48849 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0103 19:59:50.172060   48849 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 19:59:50.172122   48849 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 19:59:50.186589   48849 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39833
	I0103 19:59:50.187029   48849 main.go:141] libmachine: () Calling .GetVersion
	I0103 19:59:50.187569   48849 main.go:141] libmachine: Using API Version  1
	I0103 19:59:50.187593   48849 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 19:59:50.187945   48849 main.go:141] libmachine: () Calling .GetMachineName
	I0103 19:59:50.188129   48849 main.go:141] libmachine: (calico-719541) Calling .GetMachineName
	I0103 19:59:50.188250   48849 main.go:141] libmachine: (calico-719541) Calling .DriverName
	I0103 19:59:50.188379   48849 start.go:159] libmachine.API.Create for "calico-719541" (driver="kvm2")
	I0103 19:59:50.188414   48849 client.go:168] LocalClient.Create starting
	I0103 19:59:50.188455   48849 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem
	I0103 19:59:50.188489   48849 main.go:141] libmachine: Decoding PEM data...
	I0103 19:59:50.188505   48849 main.go:141] libmachine: Parsing certificate...
	I0103 19:59:50.188556   48849 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem
	I0103 19:59:50.188574   48849 main.go:141] libmachine: Decoding PEM data...
	I0103 19:59:50.188587   48849 main.go:141] libmachine: Parsing certificate...
	I0103 19:59:50.188602   48849 main.go:141] libmachine: Running pre-create checks...
	I0103 19:59:50.188610   48849 main.go:141] libmachine: (calico-719541) Calling .PreCreateCheck
	I0103 19:59:50.188901   48849 main.go:141] libmachine: (calico-719541) Calling .GetConfigRaw
	I0103 19:59:50.189260   48849 main.go:141] libmachine: Creating machine...
	I0103 19:59:50.189273   48849 main.go:141] libmachine: (calico-719541) Calling .Create
	I0103 19:59:50.189433   48849 main.go:141] libmachine: (calico-719541) Creating KVM machine...
	I0103 19:59:50.190501   48849 main.go:141] libmachine: (calico-719541) DBG | found existing default KVM network
	I0103 19:59:50.191941   48849 main.go:141] libmachine: (calico-719541) DBG | I0103 19:59:50.191795   48872 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f840}
	I0103 19:59:50.197011   48849 main.go:141] libmachine: (calico-719541) DBG | trying to create private KVM network mk-calico-719541 192.168.39.0/24...
	I0103 19:59:50.276380   48849 main.go:141] libmachine: (calico-719541) DBG | private KVM network mk-calico-719541 192.168.39.0/24 created
	I0103 19:59:50.276428   48849 main.go:141] libmachine: (calico-719541) DBG | I0103 19:59:50.276287   48872 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17885-9609/.minikube
	I0103 19:59:50.276443   48849 main.go:141] libmachine: (calico-719541) Setting up store path in /home/jenkins/minikube-integration/17885-9609/.minikube/machines/calico-719541 ...
	I0103 19:59:50.276464   48849 main.go:141] libmachine: (calico-719541) Building disk image from file:///home/jenkins/minikube-integration/17885-9609/.minikube/cache/iso/amd64/minikube-v1.32.1-1702708929-17806-amd64.iso
	I0103 19:59:50.276486   48849 main.go:141] libmachine: (calico-719541) Downloading /home/jenkins/minikube-integration/17885-9609/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17885-9609/.minikube/cache/iso/amd64/minikube-v1.32.1-1702708929-17806-amd64.iso...
	I0103 19:59:50.506157   48849 main.go:141] libmachine: (calico-719541) DBG | I0103 19:59:50.506007   48872 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/calico-719541/id_rsa...
	I0103 19:59:50.608295   48849 main.go:141] libmachine: (calico-719541) DBG | I0103 19:59:50.608186   48872 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/calico-719541/calico-719541.rawdisk...
	I0103 19:59:50.608322   48849 main.go:141] libmachine: (calico-719541) DBG | Writing magic tar header
	I0103 19:59:50.608335   48849 main.go:141] libmachine: (calico-719541) DBG | Writing SSH key tar header
	I0103 19:59:50.608344   48849 main.go:141] libmachine: (calico-719541) DBG | I0103 19:59:50.608308   48872 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17885-9609/.minikube/machines/calico-719541 ...
	I0103 19:59:50.608432   48849 main.go:141] libmachine: (calico-719541) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/calico-719541
	I0103 19:59:50.608474   48849 main.go:141] libmachine: (calico-719541) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17885-9609/.minikube/machines
	I0103 19:59:50.608492   48849 main.go:141] libmachine: (calico-719541) Setting executable bit set on /home/jenkins/minikube-integration/17885-9609/.minikube/machines/calico-719541 (perms=drwx------)
	I0103 19:59:50.608510   48849 main.go:141] libmachine: (calico-719541) Setting executable bit set on /home/jenkins/minikube-integration/17885-9609/.minikube/machines (perms=drwxr-xr-x)
	I0103 19:59:50.608525   48849 main.go:141] libmachine: (calico-719541) Setting executable bit set on /home/jenkins/minikube-integration/17885-9609/.minikube (perms=drwxr-xr-x)
	I0103 19:59:50.608540   48849 main.go:141] libmachine: (calico-719541) Setting executable bit set on /home/jenkins/minikube-integration/17885-9609 (perms=drwxrwxr-x)
	I0103 19:59:50.608550   48849 main.go:141] libmachine: (calico-719541) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17885-9609/.minikube
	I0103 19:59:50.608557   48849 main.go:141] libmachine: (calico-719541) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0103 19:59:50.608568   48849 main.go:141] libmachine: (calico-719541) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17885-9609
	I0103 19:59:50.608582   48849 main.go:141] libmachine: (calico-719541) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0103 19:59:50.608596   48849 main.go:141] libmachine: (calico-719541) DBG | Checking permissions on dir: /home/jenkins
	I0103 19:59:50.608607   48849 main.go:141] libmachine: (calico-719541) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0103 19:59:50.608623   48849 main.go:141] libmachine: (calico-719541) Creating domain...
	I0103 19:59:50.608674   48849 main.go:141] libmachine: (calico-719541) DBG | Checking permissions on dir: /home
	I0103 19:59:50.608694   48849 main.go:141] libmachine: (calico-719541) DBG | Skipping /home - not owner
	I0103 19:59:50.609659   48849 main.go:141] libmachine: (calico-719541) define libvirt domain using xml: 
	I0103 19:59:50.609683   48849 main.go:141] libmachine: (calico-719541) <domain type='kvm'>
	I0103 19:59:50.609694   48849 main.go:141] libmachine: (calico-719541)   <name>calico-719541</name>
	I0103 19:59:50.609706   48849 main.go:141] libmachine: (calico-719541)   <memory unit='MiB'>3072</memory>
	I0103 19:59:50.609720   48849 main.go:141] libmachine: (calico-719541)   <vcpu>2</vcpu>
	I0103 19:59:50.609732   48849 main.go:141] libmachine: (calico-719541)   <features>
	I0103 19:59:50.609741   48849 main.go:141] libmachine: (calico-719541)     <acpi/>
	I0103 19:59:50.609754   48849 main.go:141] libmachine: (calico-719541)     <apic/>
	I0103 19:59:50.609767   48849 main.go:141] libmachine: (calico-719541)     <pae/>
	I0103 19:59:50.609777   48849 main.go:141] libmachine: (calico-719541)     
	I0103 19:59:50.609799   48849 main.go:141] libmachine: (calico-719541)   </features>
	I0103 19:59:50.609820   48849 main.go:141] libmachine: (calico-719541)   <cpu mode='host-passthrough'>
	I0103 19:59:50.609838   48849 main.go:141] libmachine: (calico-719541)   
	I0103 19:59:50.609843   48849 main.go:141] libmachine: (calico-719541)   </cpu>
	I0103 19:59:50.609849   48849 main.go:141] libmachine: (calico-719541)   <os>
	I0103 19:59:50.609864   48849 main.go:141] libmachine: (calico-719541)     <type>hvm</type>
	I0103 19:59:50.609873   48849 main.go:141] libmachine: (calico-719541)     <boot dev='cdrom'/>
	I0103 19:59:50.609878   48849 main.go:141] libmachine: (calico-719541)     <boot dev='hd'/>
	I0103 19:59:50.609884   48849 main.go:141] libmachine: (calico-719541)     <bootmenu enable='no'/>
	I0103 19:59:50.609891   48849 main.go:141] libmachine: (calico-719541)   </os>
	I0103 19:59:50.609897   48849 main.go:141] libmachine: (calico-719541)   <devices>
	I0103 19:59:50.609905   48849 main.go:141] libmachine: (calico-719541)     <disk type='file' device='cdrom'>
	I0103 19:59:50.609918   48849 main.go:141] libmachine: (calico-719541)       <source file='/home/jenkins/minikube-integration/17885-9609/.minikube/machines/calico-719541/boot2docker.iso'/>
	I0103 19:59:50.609928   48849 main.go:141] libmachine: (calico-719541)       <target dev='hdc' bus='scsi'/>
	I0103 19:59:50.609941   48849 main.go:141] libmachine: (calico-719541)       <readonly/>
	I0103 19:59:50.609958   48849 main.go:141] libmachine: (calico-719541)     </disk>
	I0103 19:59:50.609975   48849 main.go:141] libmachine: (calico-719541)     <disk type='file' device='disk'>
	I0103 19:59:50.609989   48849 main.go:141] libmachine: (calico-719541)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0103 19:59:50.610028   48849 main.go:141] libmachine: (calico-719541)       <source file='/home/jenkins/minikube-integration/17885-9609/.minikube/machines/calico-719541/calico-719541.rawdisk'/>
	I0103 19:59:50.610063   48849 main.go:141] libmachine: (calico-719541)       <target dev='hda' bus='virtio'/>
	I0103 19:59:50.610079   48849 main.go:141] libmachine: (calico-719541)     </disk>
	I0103 19:59:50.610091   48849 main.go:141] libmachine: (calico-719541)     <interface type='network'>
	I0103 19:59:50.610103   48849 main.go:141] libmachine: (calico-719541)       <source network='mk-calico-719541'/>
	I0103 19:59:50.610115   48849 main.go:141] libmachine: (calico-719541)       <model type='virtio'/>
	I0103 19:59:50.610125   48849 main.go:141] libmachine: (calico-719541)     </interface>
	I0103 19:59:50.610134   48849 main.go:141] libmachine: (calico-719541)     <interface type='network'>
	I0103 19:59:50.610145   48849 main.go:141] libmachine: (calico-719541)       <source network='default'/>
	I0103 19:59:50.610158   48849 main.go:141] libmachine: (calico-719541)       <model type='virtio'/>
	I0103 19:59:50.610172   48849 main.go:141] libmachine: (calico-719541)     </interface>
	I0103 19:59:50.610184   48849 main.go:141] libmachine: (calico-719541)     <serial type='pty'>
	I0103 19:59:50.610194   48849 main.go:141] libmachine: (calico-719541)       <target port='0'/>
	I0103 19:59:50.610205   48849 main.go:141] libmachine: (calico-719541)     </serial>
	I0103 19:59:50.610217   48849 main.go:141] libmachine: (calico-719541)     <console type='pty'>
	I0103 19:59:50.610235   48849 main.go:141] libmachine: (calico-719541)       <target type='serial' port='0'/>
	I0103 19:59:50.610273   48849 main.go:141] libmachine: (calico-719541)     </console>
	I0103 19:59:50.610292   48849 main.go:141] libmachine: (calico-719541)     <rng model='virtio'>
	I0103 19:59:50.610303   48849 main.go:141] libmachine: (calico-719541)       <backend model='random'>/dev/random</backend>
	I0103 19:59:50.610312   48849 main.go:141] libmachine: (calico-719541)     </rng>
	I0103 19:59:50.610318   48849 main.go:141] libmachine: (calico-719541)     
	I0103 19:59:50.610335   48849 main.go:141] libmachine: (calico-719541)     
	I0103 19:59:50.610356   48849 main.go:141] libmachine: (calico-719541)   </devices>
	I0103 19:59:50.610370   48849 main.go:141] libmachine: (calico-719541) </domain>
	I0103 19:59:50.610380   48849 main.go:141] libmachine: (calico-719541) 
	I0103 19:59:50.614719   48849 main.go:141] libmachine: (calico-719541) DBG | domain calico-719541 has defined MAC address 52:54:00:be:c8:76 in network default
	I0103 19:59:50.615260   48849 main.go:141] libmachine: (calico-719541) Ensuring networks are active...
	I0103 19:59:50.615293   48849 main.go:141] libmachine: (calico-719541) DBG | domain calico-719541 has defined MAC address 52:54:00:c3:db:f8 in network mk-calico-719541
	I0103 19:59:50.615986   48849 main.go:141] libmachine: (calico-719541) Ensuring network default is active
	I0103 19:59:50.616274   48849 main.go:141] libmachine: (calico-719541) Ensuring network mk-calico-719541 is active
	I0103 19:59:50.616779   48849 main.go:141] libmachine: (calico-719541) Getting domain xml...
	I0103 19:59:50.617373   48849 main.go:141] libmachine: (calico-719541) Creating domain...
	I0103 19:59:51.880099   48849 main.go:141] libmachine: (calico-719541) Waiting to get IP...
	I0103 19:59:51.880756   48849 main.go:141] libmachine: (calico-719541) DBG | domain calico-719541 has defined MAC address 52:54:00:c3:db:f8 in network mk-calico-719541
	I0103 19:59:51.881201   48849 main.go:141] libmachine: (calico-719541) DBG | unable to find current IP address of domain calico-719541 in network mk-calico-719541
	I0103 19:59:51.881238   48849 main.go:141] libmachine: (calico-719541) DBG | I0103 19:59:51.881187   48872 retry.go:31] will retry after 289.795257ms: waiting for machine to come up
	I0103 19:59:52.172693   48849 main.go:141] libmachine: (calico-719541) DBG | domain calico-719541 has defined MAC address 52:54:00:c3:db:f8 in network mk-calico-719541
	I0103 19:59:52.173261   48849 main.go:141] libmachine: (calico-719541) DBG | unable to find current IP address of domain calico-719541 in network mk-calico-719541
	I0103 19:59:52.173291   48849 main.go:141] libmachine: (calico-719541) DBG | I0103 19:59:52.173213   48872 retry.go:31] will retry after 294.821334ms: waiting for machine to come up
	I0103 19:59:52.469691   48849 main.go:141] libmachine: (calico-719541) DBG | domain calico-719541 has defined MAC address 52:54:00:c3:db:f8 in network mk-calico-719541
	I0103 19:59:52.470135   48849 main.go:141] libmachine: (calico-719541) DBG | unable to find current IP address of domain calico-719541 in network mk-calico-719541
	I0103 19:59:52.470173   48849 main.go:141] libmachine: (calico-719541) DBG | I0103 19:59:52.470092   48872 retry.go:31] will retry after 375.646278ms: waiting for machine to come up
	I0103 19:59:52.847532   48849 main.go:141] libmachine: (calico-719541) DBG | domain calico-719541 has defined MAC address 52:54:00:c3:db:f8 in network mk-calico-719541
	I0103 19:59:52.848107   48849 main.go:141] libmachine: (calico-719541) DBG | unable to find current IP address of domain calico-719541 in network mk-calico-719541
	I0103 19:59:52.848135   48849 main.go:141] libmachine: (calico-719541) DBG | I0103 19:59:52.848065   48872 retry.go:31] will retry after 382.660652ms: waiting for machine to come up
	I0103 19:59:53.232569   48849 main.go:141] libmachine: (calico-719541) DBG | domain calico-719541 has defined MAC address 52:54:00:c3:db:f8 in network mk-calico-719541
	I0103 19:59:53.233057   48849 main.go:141] libmachine: (calico-719541) DBG | unable to find current IP address of domain calico-719541 in network mk-calico-719541
	I0103 19:59:53.233088   48849 main.go:141] libmachine: (calico-719541) DBG | I0103 19:59:53.233015   48872 retry.go:31] will retry after 522.321303ms: waiting for machine to come up
	I0103 19:59:53.756545   48849 main.go:141] libmachine: (calico-719541) DBG | domain calico-719541 has defined MAC address 52:54:00:c3:db:f8 in network mk-calico-719541
	I0103 19:59:53.757098   48849 main.go:141] libmachine: (calico-719541) DBG | unable to find current IP address of domain calico-719541 in network mk-calico-719541
	I0103 19:59:53.757131   48849 main.go:141] libmachine: (calico-719541) DBG | I0103 19:59:53.757041   48872 retry.go:31] will retry after 694.331165ms: waiting for machine to come up
	I0103 19:59:54.453074   48849 main.go:141] libmachine: (calico-719541) DBG | domain calico-719541 has defined MAC address 52:54:00:c3:db:f8 in network mk-calico-719541
	I0103 19:59:54.453615   48849 main.go:141] libmachine: (calico-719541) DBG | unable to find current IP address of domain calico-719541 in network mk-calico-719541
	I0103 19:59:54.453644   48849 main.go:141] libmachine: (calico-719541) DBG | I0103 19:59:54.453533   48872 retry.go:31] will retry after 1.049710109s: waiting for machine to come up
	I0103 19:59:53.283961   46928 pod_ready.go:102] pod "kube-apiserver-pause-705639" in "kube-system" namespace has status "Ready":"False"
	I0103 19:59:55.285816   46928 pod_ready.go:102] pod "kube-apiserver-pause-705639" in "kube-system" namespace has status "Ready":"False"
	I0103 19:59:57.784820   46928 pod_ready.go:102] pod "kube-apiserver-pause-705639" in "kube-system" namespace has status "Ready":"False"
	I0103 19:59:58.791842   46928 pod_ready.go:92] pod "kube-apiserver-pause-705639" in "kube-system" namespace has status "Ready":"True"
	I0103 19:59:58.791866   46928 pod_ready.go:81] duration metric: took 7.51610105s waiting for pod "kube-apiserver-pause-705639" in "kube-system" namespace to be "Ready" ...
	I0103 19:59:58.791878   46928 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-705639" in "kube-system" namespace to be "Ready" ...
	I0103 19:59:58.803930   46928 pod_ready.go:92] pod "kube-controller-manager-pause-705639" in "kube-system" namespace has status "Ready":"True"
	I0103 19:59:58.803950   46928 pod_ready.go:81] duration metric: took 12.065134ms waiting for pod "kube-controller-manager-pause-705639" in "kube-system" namespace to be "Ready" ...
	I0103 19:59:58.803959   46928 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-lwbnd" in "kube-system" namespace to be "Ready" ...
	I0103 19:59:58.812295   46928 pod_ready.go:92] pod "kube-proxy-lwbnd" in "kube-system" namespace has status "Ready":"True"
	I0103 19:59:58.812315   46928 pod_ready.go:81] duration metric: took 8.351094ms waiting for pod "kube-proxy-lwbnd" in "kube-system" namespace to be "Ready" ...
	I0103 19:59:58.812323   46928 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-705639" in "kube-system" namespace to be "Ready" ...
	I0103 19:59:58.820307   46928 pod_ready.go:92] pod "kube-scheduler-pause-705639" in "kube-system" namespace has status "Ready":"True"
	I0103 19:59:58.820328   46928 pod_ready.go:81] duration metric: took 7.998404ms waiting for pod "kube-scheduler-pause-705639" in "kube-system" namespace to be "Ready" ...
	I0103 19:59:58.820338   46928 pod_ready.go:38] duration metric: took 12.570692536s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 19:59:58.820356   46928 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0103 19:59:58.832675   46928 ops.go:34] apiserver oom_adj: -16
	I0103 19:59:58.832696   46928 kubeadm.go:640] restartCluster took 37.373494615s
	I0103 19:59:58.832705   46928 kubeadm.go:406] StartCluster complete in 37.550826018s
	I0103 19:59:58.832725   46928 settings.go:142] acquiring lock: {Name:mkd213c48538fa01cb82b417485055a8adbf5e4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 19:59:58.832813   46928 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17885-9609/kubeconfig
	I0103 19:59:58.833695   46928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-9609/kubeconfig: {Name:mkbd4e6a8b39f5a4a43fb71671a7bbd8b1617cf1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 19:59:58.833932   46928 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0103 19:59:58.833965   46928 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0103 19:59:58.835927   46928 out.go:177] * Enabled addons: 
	I0103 19:59:58.834180   46928 config.go:182] Loaded profile config "pause-705639": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 19:59:58.834752   46928 kapi.go:59] client config for pause-705639: &rest.Config{Host:"https://192.168.83.234:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17885-9609/.minikube/profiles/pause-705639/client.crt", KeyFile:"/home/jenkins/minikube-integration/17885-9609/.minikube/profiles/pause-705639/client.key", CAFile:"/home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]strin
g(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c20060), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0103 19:59:58.837508   46928 addons.go:508] enable addons completed in 3.551926ms: enabled=[]
	I0103 19:59:58.840765   46928 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-705639" context rescaled to 1 replicas
	I0103 19:59:58.840806   46928 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.83.234 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0103 19:59:58.842635   46928 out.go:177] * Verifying Kubernetes components...
	I0103 19:59:55.506623   48849 main.go:141] libmachine: (calico-719541) DBG | domain calico-719541 has defined MAC address 52:54:00:c3:db:f8 in network mk-calico-719541
	I0103 19:59:55.507135   48849 main.go:141] libmachine: (calico-719541) DBG | unable to find current IP address of domain calico-719541 in network mk-calico-719541
	I0103 19:59:55.507160   48849 main.go:141] libmachine: (calico-719541) DBG | I0103 19:59:55.507071   48872 retry.go:31] will retry after 1.129495665s: waiting for machine to come up
	I0103 19:59:56.638537   48849 main.go:141] libmachine: (calico-719541) DBG | domain calico-719541 has defined MAC address 52:54:00:c3:db:f8 in network mk-calico-719541
	I0103 19:59:56.639029   48849 main.go:141] libmachine: (calico-719541) DBG | unable to find current IP address of domain calico-719541 in network mk-calico-719541
	I0103 19:59:56.639061   48849 main.go:141] libmachine: (calico-719541) DBG | I0103 19:59:56.638973   48872 retry.go:31] will retry after 1.563343867s: waiting for machine to come up
	I0103 19:59:58.203747   48849 main.go:141] libmachine: (calico-719541) DBG | domain calico-719541 has defined MAC address 52:54:00:c3:db:f8 in network mk-calico-719541
	I0103 19:59:58.204209   48849 main.go:141] libmachine: (calico-719541) DBG | unable to find current IP address of domain calico-719541 in network mk-calico-719541
	I0103 19:59:58.204239   48849 main.go:141] libmachine: (calico-719541) DBG | I0103 19:59:58.204179   48872 retry.go:31] will retry after 2.070449561s: waiting for machine to come up
	I0103 19:59:58.843952   46928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 19:59:58.958681   46928 node_ready.go:35] waiting up to 6m0s for node "pause-705639" to be "Ready" ...
	I0103 19:59:58.959089   46928 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0103 19:59:58.964497   46928 node_ready.go:49] node "pause-705639" has status "Ready":"True"
	I0103 19:59:58.964529   46928 node_ready.go:38] duration metric: took 5.770132ms waiting for node "pause-705639" to be "Ready" ...
	I0103 19:59:58.964540   46928 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 19:59:58.971047   46928 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-fkkp5" in "kube-system" namespace to be "Ready" ...
	I0103 19:59:59.182776   46928 pod_ready.go:92] pod "coredns-5dd5756b68-fkkp5" in "kube-system" namespace has status "Ready":"True"
	I0103 19:59:59.182806   46928 pod_ready.go:81] duration metric: took 211.687047ms waiting for pod "coredns-5dd5756b68-fkkp5" in "kube-system" namespace to be "Ready" ...
	I0103 19:59:59.182821   46928 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-705639" in "kube-system" namespace to be "Ready" ...
	I0103 19:59:59.581989   46928 pod_ready.go:92] pod "etcd-pause-705639" in "kube-system" namespace has status "Ready":"True"
	I0103 19:59:59.582014   46928 pod_ready.go:81] duration metric: took 399.18598ms waiting for pod "etcd-pause-705639" in "kube-system" namespace to be "Ready" ...
	I0103 19:59:59.582024   46928 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-705639" in "kube-system" namespace to be "Ready" ...
	I0103 19:59:59.982161   46928 pod_ready.go:92] pod "kube-apiserver-pause-705639" in "kube-system" namespace has status "Ready":"True"
	I0103 19:59:59.982189   46928 pod_ready.go:81] duration metric: took 400.157537ms waiting for pod "kube-apiserver-pause-705639" in "kube-system" namespace to be "Ready" ...
	I0103 19:59:59.982202   46928 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-705639" in "kube-system" namespace to be "Ready" ...
	I0103 20:00:00.381104   46928 pod_ready.go:92] pod "kube-controller-manager-pause-705639" in "kube-system" namespace has status "Ready":"True"
	I0103 20:00:00.381135   46928 pod_ready.go:81] duration metric: took 398.923061ms waiting for pod "kube-controller-manager-pause-705639" in "kube-system" namespace to be "Ready" ...
	I0103 20:00:00.381150   46928 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lwbnd" in "kube-system" namespace to be "Ready" ...
	I0103 20:00:00.781832   46928 pod_ready.go:92] pod "kube-proxy-lwbnd" in "kube-system" namespace has status "Ready":"True"
	I0103 20:00:00.781862   46928 pod_ready.go:81] duration metric: took 400.70431ms waiting for pod "kube-proxy-lwbnd" in "kube-system" namespace to be "Ready" ...
	I0103 20:00:00.781873   46928 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-705639" in "kube-system" namespace to be "Ready" ...
	I0103 20:00:01.182444   46928 pod_ready.go:92] pod "kube-scheduler-pause-705639" in "kube-system" namespace has status "Ready":"True"
	I0103 20:00:01.182470   46928 pod_ready.go:81] duration metric: took 400.591162ms waiting for pod "kube-scheduler-pause-705639" in "kube-system" namespace to be "Ready" ...
	I0103 20:00:01.182479   46928 pod_ready.go:38] duration metric: took 2.217928363s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:00:01.182492   46928 api_server.go:52] waiting for apiserver process to appear ...
	I0103 20:00:01.182561   46928 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:00:01.196343   46928 api_server.go:72] duration metric: took 2.35550047s to wait for apiserver process to appear ...
	I0103 20:00:01.196386   46928 api_server.go:88] waiting for apiserver healthz status ...
	I0103 20:00:01.196413   46928 api_server.go:253] Checking apiserver healthz at https://192.168.83.234:8443/healthz ...
	I0103 20:00:01.204537   46928 api_server.go:279] https://192.168.83.234:8443/healthz returned 200:
	ok
	I0103 20:00:01.206442   46928 api_server.go:141] control plane version: v1.28.4
	I0103 20:00:01.206472   46928 api_server.go:131] duration metric: took 10.07776ms to wait for apiserver health ...
	I0103 20:00:01.206484   46928 system_pods.go:43] waiting for kube-system pods to appear ...
	I0103 20:00:01.385721   46928 system_pods.go:59] 6 kube-system pods found
	I0103 20:00:01.385750   46928 system_pods.go:61] "coredns-5dd5756b68-fkkp5" [9226d155-0c50-444f-9899-7c425b5ea32e] Running
	I0103 20:00:01.385755   46928 system_pods.go:61] "etcd-pause-705639" [7cbb84a5-dfc6-4150-ba57-0a6c00c22a63] Running
	I0103 20:00:01.385759   46928 system_pods.go:61] "kube-apiserver-pause-705639" [388802b7-66b9-4b93-90b6-69533f6808f3] Running
	I0103 20:00:01.385764   46928 system_pods.go:61] "kube-controller-manager-pause-705639" [44057709-897c-4c0c-a6a4-477a83fdb68f] Running
	I0103 20:00:01.385767   46928 system_pods.go:61] "kube-proxy-lwbnd" [2dbfaa3d-dc71-48ee-9746-357990a3b6b5] Running
	I0103 20:00:01.385771   46928 system_pods.go:61] "kube-scheduler-pause-705639" [f5b3ab0d-2f5b-4cb9-a85a-0b007edf09fa] Running
	I0103 20:00:01.385777   46928 system_pods.go:74] duration metric: took 179.286866ms to wait for pod list to return data ...
	I0103 20:00:01.385784   46928 default_sa.go:34] waiting for default service account to be created ...
	I0103 20:00:01.581339   46928 default_sa.go:45] found service account: "default"
	I0103 20:00:01.581368   46928 default_sa.go:55] duration metric: took 195.579067ms for default service account to be created ...
	I0103 20:00:01.581380   46928 system_pods.go:116] waiting for k8s-apps to be running ...
	I0103 20:00:01.785084   46928 system_pods.go:86] 6 kube-system pods found
	I0103 20:00:01.785120   46928 system_pods.go:89] "coredns-5dd5756b68-fkkp5" [9226d155-0c50-444f-9899-7c425b5ea32e] Running
	I0103 20:00:01.785129   46928 system_pods.go:89] "etcd-pause-705639" [7cbb84a5-dfc6-4150-ba57-0a6c00c22a63] Running
	I0103 20:00:01.785136   46928 system_pods.go:89] "kube-apiserver-pause-705639" [388802b7-66b9-4b93-90b6-69533f6808f3] Running
	I0103 20:00:01.785143   46928 system_pods.go:89] "kube-controller-manager-pause-705639" [44057709-897c-4c0c-a6a4-477a83fdb68f] Running
	I0103 20:00:01.785149   46928 system_pods.go:89] "kube-proxy-lwbnd" [2dbfaa3d-dc71-48ee-9746-357990a3b6b5] Running
	I0103 20:00:01.785158   46928 system_pods.go:89] "kube-scheduler-pause-705639" [f5b3ab0d-2f5b-4cb9-a85a-0b007edf09fa] Running
	I0103 20:00:01.785167   46928 system_pods.go:126] duration metric: took 203.780978ms to wait for k8s-apps to be running ...
	I0103 20:00:01.785180   46928 system_svc.go:44] waiting for kubelet service to be running ....
	I0103 20:00:01.785233   46928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 20:00:01.802961   46928 system_svc.go:56] duration metric: took 17.76927ms WaitForService to wait for kubelet.
	I0103 20:00:01.802989   46928 kubeadm.go:581] duration metric: took 2.962157236s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0103 20:00:01.803008   46928 node_conditions.go:102] verifying NodePressure condition ...
	I0103 20:00:01.982111   46928 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0103 20:00:01.982147   46928 node_conditions.go:123] node cpu capacity is 2
	I0103 20:00:01.982161   46928 node_conditions.go:105] duration metric: took 179.147132ms to run NodePressure ...
	I0103 20:00:01.982175   46928 start.go:228] waiting for startup goroutines ...
	I0103 20:00:01.982184   46928 start.go:233] waiting for cluster config update ...
	I0103 20:00:01.982194   46928 start.go:242] writing updated cluster config ...
	I0103 20:00:01.982569   46928 ssh_runner.go:195] Run: rm -f paused
	I0103 20:00:02.044628   46928 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0103 20:00:02.047096   46928 out.go:177] * Done! kubectl is now configured to use "pause-705639" cluster and "default" namespace by default
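	
	A rough way to re-verify the same end state by hand (a sketch only; it assumes the kubectl context is named after the profile, as the final log line above indicates, and uses standard kubectl flags):
	
	  kubectl --context pause-705639 get --raw /healthz
	  kubectl --context pause-705639 get pods -n kube-system
	
	The first command should return the same "ok" the healthz probe logged above; the second lists the six kube-system pods enumerated in the log.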
	
	
	==> CRI-O <==
	-- Journal begins at Wed 2024-01-03 19:57:30 UTC, ends at Wed 2024-01-03 20:00:05 UTC. --
	Jan 03 20:00:05 pause-705639 crio[2401]: time="2024-01-03 20:00:05.023313008Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:4cd50b769d4b7221885b638550de6b09b111bfdaf606ea17711526dfcfb13c91,Metadata:&PodSandboxMetadata{Name:kube-proxy-lwbnd,Uid:2dbfaa3d-dc71-48ee-9746-357990a3b6b5,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1704311971191197477,Labels:map[string]string{controller-revision-hash: 8486c7d9cd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-lwbnd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dbfaa3d-dc71-48ee-9746-357990a3b6b5,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-03T19:58:21.394775153Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7d9ea3e3e063f3e40b9aeb0c3a14b2152b2fc3a7460d10ea9a2b868ff48de02f,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-fkkp5,Uid:9226d155-0c50-444f-9899-7c425b5ea32e,N
amespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1704311959857159634,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-fkkp5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9226d155-0c50-444f-9899-7c425b5ea32e,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-03T19:58:21.719900704Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1da6fb4cd1f940f7cec2bd9c50060cb33f80039d55089e4055b1a92bcf9485f6,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-705639,Uid:cd9c59c6d56232bfd7011bd5817fde97,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1704311959851978510,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-705639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd9c59c6d56232bfd7011bd5817fde97,tier: control-pla
ne,},Annotations:map[string]string{kubernetes.io/config.hash: cd9c59c6d56232bfd7011bd5817fde97,kubernetes.io/config.seen: 2024-01-03T19:58:08.085017950Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b3fd2466af4f494744fe3b03eb40a564b890c08a7c6466e28c4e23b75698752d,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-705639,Uid:85e9694dda2f6e2035142215b093a9d6,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1704311959773622259,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-705639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85e9694dda2f6e2035142215b093a9d6,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 85e9694dda2f6e2035142215b093a9d6,kubernetes.io/config.seen: 2024-01-03T19:58:08.085011331Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:586c5324ad955f559aa33b7add0038e78fdb8e93a045afc881e9de85cab0d7bd,Metadata:&PodSan
dboxMetadata{Name:etcd-pause-705639,Uid:1bf83f638fa3405cd004c68f3ff4d378,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1704311959728110767,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-705639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bf83f638fa3405cd004c68f3ff4d378,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.83.234:2379,kubernetes.io/config.hash: 1bf83f638fa3405cd004c68f3ff4d378,kubernetes.io/config.seen: 2024-01-03T19:58:08.085015373Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0ff251fadd26f8b93533ea1c171fc68d04afcb59e38800748c7e032b339c6346,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-705639,Uid:b0173835980ecc42799c1960492dad7b,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1704311959711213566,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.
kubernetes.pod.name: kube-apiserver-pause-705639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0173835980ecc42799c1960492dad7b,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.83.234:8443,kubernetes.io/config.hash: b0173835980ecc42799c1960492dad7b,kubernetes.io/config.seen: 2024-01-03T19:58:08.085016821Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a0ddc31981afb1b9561f831d23f7dd5263c250d6c68325ab76bd3f2a89e484e8,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-705639,Uid:b0173835980ecc42799c1960492dad7b,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1704311956739821209,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-705639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0173835980ecc42799c1960492dad7b,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernete
s.io/kube-apiserver.advertise-address.endpoint: 192.168.83.234:8443,kubernetes.io/config.hash: b0173835980ecc42799c1960492dad7b,kubernetes.io/config.seen: 2024-01-03T19:58:08.085016821Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=9344d37a-f638-4d9f-9b87-e22cde65c85f name=/runtime.v1.RuntimeService/ListPodSandbox
	Jan 03 20:00:05 pause-705639 crio[2401]: time="2024-01-03 20:00:05.025280608Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c48b7792-957f-4f01-b0a3-ff9f3d74b2ae name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:00:05 pause-705639 crio[2401]: time="2024-01-03 20:00:05.025354143Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c48b7792-957f-4f01-b0a3-ff9f3d74b2ae name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:00:05 pause-705639 crio[2401]: time="2024-01-03 20:00:05.025773624Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:455434fe7c01851c7a44e8f2856bcd27ba50fbc171872b32ca39d3dd6e44aee1,PodSandboxId:7d9ea3e3e063f3e40b9aeb0c3a14b2152b2fc3a7460d10ea9a2b868ff48de02f,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704311985082791945,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-fkkp5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9226d155-0c50-444f-9899-7c425b5ea32e,},Annotations:map[string]string{io.kubernetes.container.hash: 877b8335,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4e78cd600ca588f554ee07b43d3401d215e526075fb0fd6d8c89562ade74c7d,PodSandboxId:4cd50b769d4b7221885b638550de6b09b111bfdaf606ea17711526dfcfb13c91,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704311985109015257,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lwbnd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 2dbfaa3d-dc71-48ee-9746-357990a3b6b5,},Annotations:map[string]string{io.kubernetes.container.hash: cb6296c2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a33080889b8bb498826a4410f447222ced0cec3d0fd7fbba815e1281c6f0425b,PodSandboxId:b3fd2466af4f494744fe3b03eb40a564b890c08a7c6466e28c4e23b75698752d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704311979480811724,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-705639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85e9694
dda2f6e2035142215b093a9d6,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b54a6b8473f946ad2d47d0c5bccc78099bb3853bce060f3cab8e0cc8ed8a2f9d,PodSandboxId:586c5324ad955f559aa33b7add0038e78fdb8e93a045afc881e9de85cab0d7bd,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704311979448310234,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-705639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bf83f638fa3405cd004c68f3ff4d378,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 6c8cf32a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77bb535231c2293addc26368e62c04a3fba1b78e6f33736b626290437a5d1aff,PodSandboxId:1da6fb4cd1f940f7cec2bd9c50060cb33f80039d55089e4055b1a92bcf9485f6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704311979505632130,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-705639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd9c59c6d56232bfd7011bd5817fde97,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6645f9e298bdada5f9bea0c6ce1e25771590562d4af0bd309743e755e6c70c09,PodSandboxId:0ff251fadd26f8b93533ea1c171fc68d04afcb59e38800748c7e032b339c6346,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704311979423584507,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-705639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0173835980ecc42799c1960492dad7b,},Annotations:map[string]s
tring{io.kubernetes.container.hash: a1f241a6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:903578371b774d38b9816330b3a9c348bd260670a0b2eaf6e4a0d9f7257a25d3,PodSandboxId:4cd50b769d4b7221885b638550de6b09b111bfdaf606ea17711526dfcfb13c91,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_EXITED,CreatedAt:1704311971519238479,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lwbnd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dbfaa3d-dc71-48ee-9746-357990a3b6b5,},Annotations:map[string]string{io.kubernetes.container.hash:
cb6296c2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:101176aa35ad26cd0a4f111845d7a4e730e0cad65ea629b64409e2db2d12d0be,PodSandboxId:7d9ea3e3e063f3e40b9aeb0c3a14b2152b2fc3a7460d10ea9a2b868ff48de02f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1704311962081684534,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-fkkp5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9226d155-0c50-444f-9899-7c425b5ea32e,},Annotations:map[string]string{io.kubernetes.container.hash: 877b8335,io.kubernetes.contai
ner.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57bfa544d32df074fee6e6b9271e7ed98ca6c1cc6bbac788ab3f358fc790f198,PodSandboxId:586c5324ad955f559aa33b7add0038e78fdb8e93a045afc881e9de85cab0d7bd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1704311961469083325,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-705639,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 1bf83f638fa3405cd004c68f3ff4d378,},Annotations:map[string]string{io.kubernetes.container.hash: 6c8cf32a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c2237d818c602824bb96ebeb79ae1a17cf8eec0bbf8bb9c1df1d6c42898c8d0,PodSandboxId:b3fd2466af4f494744fe3b03eb40a564b890c08a7c6466e28c4e23b75698752d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_EXITED,CreatedAt:1704311961052853887,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-705639,io.kubernetes.pod.namespace: kube-sy
stem,io.kubernetes.pod.uid: 85e9694dda2f6e2035142215b093a9d6,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1c7f60e2d0e1a72a5badd3317f1eb17c532ba8d564f595f9dbceec121b24424,PodSandboxId:1da6fb4cd1f940f7cec2bd9c50060cb33f80039d55089e4055b1a92bcf9485f6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_EXITED,CreatedAt:1704311961121112356,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-705639,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: cd9c59c6d56232bfd7011bd5817fde97,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57dcb0ae1175465ae550344541901cc54c4f621a16c787ffac5585ed8c4d8096,PodSandboxId:a0ddc31981afb1b9561f831d23f7dd5263c250d6c68325ab76bd3f2a89e484e8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1704311957809683313,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-705639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0173835
980ecc42799c1960492dad7b,},Annotations:map[string]string{io.kubernetes.container.hash: a1f241a6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c48b7792-957f-4f01-b0a3-ff9f3d74b2ae name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:00:05 pause-705639 crio[2401]: time="2024-01-03 20:00:05.068992927Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=cee43a3c-b26b-415a-b716-5869be5dd28a name=/runtime.v1.RuntimeService/Version
	Jan 03 20:00:05 pause-705639 crio[2401]: time="2024-01-03 20:00:05.069077640Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=cee43a3c-b26b-415a-b716-5869be5dd28a name=/runtime.v1.RuntimeService/Version
	Jan 03 20:00:05 pause-705639 crio[2401]: time="2024-01-03 20:00:05.070498928Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=614a9b90-d13b-4d69-ae9e-613a399f20ac name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 20:00:05 pause-705639 crio[2401]: time="2024-01-03 20:00:05.070912835Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704312005070894241,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=614a9b90-d13b-4d69-ae9e-613a399f20ac name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 20:00:05 pause-705639 crio[2401]: time="2024-01-03 20:00:05.071575234Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=093d05e2-61a3-4f55-8227-2fdc217cff80 name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:00:05 pause-705639 crio[2401]: time="2024-01-03 20:00:05.071644341Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=093d05e2-61a3-4f55-8227-2fdc217cff80 name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:00:05 pause-705639 crio[2401]: time="2024-01-03 20:00:05.071938370Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:455434fe7c01851c7a44e8f2856bcd27ba50fbc171872b32ca39d3dd6e44aee1,PodSandboxId:7d9ea3e3e063f3e40b9aeb0c3a14b2152b2fc3a7460d10ea9a2b868ff48de02f,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704311985082791945,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-fkkp5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9226d155-0c50-444f-9899-7c425b5ea32e,},Annotations:map[string]string{io.kubernetes.container.hash: 877b8335,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4e78cd600ca588f554ee07b43d3401d215e526075fb0fd6d8c89562ade74c7d,PodSandboxId:4cd50b769d4b7221885b638550de6b09b111bfdaf606ea17711526dfcfb13c91,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704311985109015257,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lwbnd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 2dbfaa3d-dc71-48ee-9746-357990a3b6b5,},Annotations:map[string]string{io.kubernetes.container.hash: cb6296c2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a33080889b8bb498826a4410f447222ced0cec3d0fd7fbba815e1281c6f0425b,PodSandboxId:b3fd2466af4f494744fe3b03eb40a564b890c08a7c6466e28c4e23b75698752d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704311979480811724,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-705639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85e9694
dda2f6e2035142215b093a9d6,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b54a6b8473f946ad2d47d0c5bccc78099bb3853bce060f3cab8e0cc8ed8a2f9d,PodSandboxId:586c5324ad955f559aa33b7add0038e78fdb8e93a045afc881e9de85cab0d7bd,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704311979448310234,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-705639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bf83f638fa3405cd004c68f3ff4d378,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 6c8cf32a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77bb535231c2293addc26368e62c04a3fba1b78e6f33736b626290437a5d1aff,PodSandboxId:1da6fb4cd1f940f7cec2bd9c50060cb33f80039d55089e4055b1a92bcf9485f6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704311979505632130,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-705639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd9c59c6d56232bfd7011bd5817fde97,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6645f9e298bdada5f9bea0c6ce1e25771590562d4af0bd309743e755e6c70c09,PodSandboxId:0ff251fadd26f8b93533ea1c171fc68d04afcb59e38800748c7e032b339c6346,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704311979423584507,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-705639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0173835980ecc42799c1960492dad7b,},Annotations:map[string]s
tring{io.kubernetes.container.hash: a1f241a6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:903578371b774d38b9816330b3a9c348bd260670a0b2eaf6e4a0d9f7257a25d3,PodSandboxId:4cd50b769d4b7221885b638550de6b09b111bfdaf606ea17711526dfcfb13c91,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_EXITED,CreatedAt:1704311971519238479,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lwbnd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dbfaa3d-dc71-48ee-9746-357990a3b6b5,},Annotations:map[string]string{io.kubernetes.container.hash:
cb6296c2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:101176aa35ad26cd0a4f111845d7a4e730e0cad65ea629b64409e2db2d12d0be,PodSandboxId:7d9ea3e3e063f3e40b9aeb0c3a14b2152b2fc3a7460d10ea9a2b868ff48de02f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1704311962081684534,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-fkkp5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9226d155-0c50-444f-9899-7c425b5ea32e,},Annotations:map[string]string{io.kubernetes.container.hash: 877b8335,io.kubernetes.contai
ner.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57bfa544d32df074fee6e6b9271e7ed98ca6c1cc6bbac788ab3f358fc790f198,PodSandboxId:586c5324ad955f559aa33b7add0038e78fdb8e93a045afc881e9de85cab0d7bd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1704311961469083325,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-705639,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 1bf83f638fa3405cd004c68f3ff4d378,},Annotations:map[string]string{io.kubernetes.container.hash: 6c8cf32a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c2237d818c602824bb96ebeb79ae1a17cf8eec0bbf8bb9c1df1d6c42898c8d0,PodSandboxId:b3fd2466af4f494744fe3b03eb40a564b890c08a7c6466e28c4e23b75698752d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_EXITED,CreatedAt:1704311961052853887,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-705639,io.kubernetes.pod.namespace: kube-sy
stem,io.kubernetes.pod.uid: 85e9694dda2f6e2035142215b093a9d6,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1c7f60e2d0e1a72a5badd3317f1eb17c532ba8d564f595f9dbceec121b24424,PodSandboxId:1da6fb4cd1f940f7cec2bd9c50060cb33f80039d55089e4055b1a92bcf9485f6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_EXITED,CreatedAt:1704311961121112356,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-705639,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: cd9c59c6d56232bfd7011bd5817fde97,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57dcb0ae1175465ae550344541901cc54c4f621a16c787ffac5585ed8c4d8096,PodSandboxId:a0ddc31981afb1b9561f831d23f7dd5263c250d6c68325ab76bd3f2a89e484e8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1704311957809683313,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-705639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0173835
980ecc42799c1960492dad7b,},Annotations:map[string]string{io.kubernetes.container.hash: a1f241a6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=093d05e2-61a3-4f55-8227-2fdc217cff80 name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:00:05 pause-705639 crio[2401]: time="2024-01-03 20:00:05.113876570Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=905e48c8-c86e-4958-aa4f-e8e863fc03f1 name=/runtime.v1.RuntimeService/Version
	Jan 03 20:00:05 pause-705639 crio[2401]: time="2024-01-03 20:00:05.113955536Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=905e48c8-c86e-4958-aa4f-e8e863fc03f1 name=/runtime.v1.RuntimeService/Version
	Jan 03 20:00:05 pause-705639 crio[2401]: time="2024-01-03 20:00:05.115336566Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=e672e919-f99c-4f4c-9972-a415ad12c974 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 20:00:05 pause-705639 crio[2401]: time="2024-01-03 20:00:05.115895761Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704312005115879776,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=e672e919-f99c-4f4c-9972-a415ad12c974 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 20:00:05 pause-705639 crio[2401]: time="2024-01-03 20:00:05.116622833Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=bab3822d-4aa8-4cd2-9ca9-cba7d177846a name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:00:05 pause-705639 crio[2401]: time="2024-01-03 20:00:05.116692431Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=bab3822d-4aa8-4cd2-9ca9-cba7d177846a name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:00:05 pause-705639 crio[2401]: time="2024-01-03 20:00:05.116942021Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:455434fe7c01851c7a44e8f2856bcd27ba50fbc171872b32ca39d3dd6e44aee1,PodSandboxId:7d9ea3e3e063f3e40b9aeb0c3a14b2152b2fc3a7460d10ea9a2b868ff48de02f,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704311985082791945,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-fkkp5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9226d155-0c50-444f-9899-7c425b5ea32e,},Annotations:map[string]string{io.kubernetes.container.hash: 877b8335,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4e78cd600ca588f554ee07b43d3401d215e526075fb0fd6d8c89562ade74c7d,PodSandboxId:4cd50b769d4b7221885b638550de6b09b111bfdaf606ea17711526dfcfb13c91,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704311985109015257,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lwbnd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 2dbfaa3d-dc71-48ee-9746-357990a3b6b5,},Annotations:map[string]string{io.kubernetes.container.hash: cb6296c2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a33080889b8bb498826a4410f447222ced0cec3d0fd7fbba815e1281c6f0425b,PodSandboxId:b3fd2466af4f494744fe3b03eb40a564b890c08a7c6466e28c4e23b75698752d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704311979480811724,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-705639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85e9694
dda2f6e2035142215b093a9d6,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b54a6b8473f946ad2d47d0c5bccc78099bb3853bce060f3cab8e0cc8ed8a2f9d,PodSandboxId:586c5324ad955f559aa33b7add0038e78fdb8e93a045afc881e9de85cab0d7bd,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704311979448310234,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-705639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bf83f638fa3405cd004c68f3ff4d378,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 6c8cf32a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77bb535231c2293addc26368e62c04a3fba1b78e6f33736b626290437a5d1aff,PodSandboxId:1da6fb4cd1f940f7cec2bd9c50060cb33f80039d55089e4055b1a92bcf9485f6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704311979505632130,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-705639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd9c59c6d56232bfd7011bd5817fde97,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6645f9e298bdada5f9bea0c6ce1e25771590562d4af0bd309743e755e6c70c09,PodSandboxId:0ff251fadd26f8b93533ea1c171fc68d04afcb59e38800748c7e032b339c6346,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704311979423584507,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-705639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0173835980ecc42799c1960492dad7b,},Annotations:map[string]s
tring{io.kubernetes.container.hash: a1f241a6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:903578371b774d38b9816330b3a9c348bd260670a0b2eaf6e4a0d9f7257a25d3,PodSandboxId:4cd50b769d4b7221885b638550de6b09b111bfdaf606ea17711526dfcfb13c91,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_EXITED,CreatedAt:1704311971519238479,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lwbnd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dbfaa3d-dc71-48ee-9746-357990a3b6b5,},Annotations:map[string]string{io.kubernetes.container.hash:
cb6296c2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:101176aa35ad26cd0a4f111845d7a4e730e0cad65ea629b64409e2db2d12d0be,PodSandboxId:7d9ea3e3e063f3e40b9aeb0c3a14b2152b2fc3a7460d10ea9a2b868ff48de02f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1704311962081684534,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-fkkp5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9226d155-0c50-444f-9899-7c425b5ea32e,},Annotations:map[string]string{io.kubernetes.container.hash: 877b8335,io.kubernetes.contai
ner.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57bfa544d32df074fee6e6b9271e7ed98ca6c1cc6bbac788ab3f358fc790f198,PodSandboxId:586c5324ad955f559aa33b7add0038e78fdb8e93a045afc881e9de85cab0d7bd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1704311961469083325,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-705639,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 1bf83f638fa3405cd004c68f3ff4d378,},Annotations:map[string]string{io.kubernetes.container.hash: 6c8cf32a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c2237d818c602824bb96ebeb79ae1a17cf8eec0bbf8bb9c1df1d6c42898c8d0,PodSandboxId:b3fd2466af4f494744fe3b03eb40a564b890c08a7c6466e28c4e23b75698752d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_EXITED,CreatedAt:1704311961052853887,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-705639,io.kubernetes.pod.namespace: kube-sy
stem,io.kubernetes.pod.uid: 85e9694dda2f6e2035142215b093a9d6,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1c7f60e2d0e1a72a5badd3317f1eb17c532ba8d564f595f9dbceec121b24424,PodSandboxId:1da6fb4cd1f940f7cec2bd9c50060cb33f80039d55089e4055b1a92bcf9485f6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_EXITED,CreatedAt:1704311961121112356,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-705639,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: cd9c59c6d56232bfd7011bd5817fde97,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57dcb0ae1175465ae550344541901cc54c4f621a16c787ffac5585ed8c4d8096,PodSandboxId:a0ddc31981afb1b9561f831d23f7dd5263c250d6c68325ab76bd3f2a89e484e8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1704311957809683313,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-705639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0173835
980ecc42799c1960492dad7b,},Annotations:map[string]string{io.kubernetes.container.hash: a1f241a6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=bab3822d-4aa8-4cd2-9ca9-cba7d177846a name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:00:05 pause-705639 crio[2401]: time="2024-01-03 20:00:05.160945251Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=985cb32d-e39c-46f6-8a48-f21e55b15475 name=/runtime.v1.RuntimeService/Version
	Jan 03 20:00:05 pause-705639 crio[2401]: time="2024-01-03 20:00:05.161029947Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=985cb32d-e39c-46f6-8a48-f21e55b15475 name=/runtime.v1.RuntimeService/Version
	Jan 03 20:00:05 pause-705639 crio[2401]: time="2024-01-03 20:00:05.162408216Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=205d5039-5707-416d-a535-198470419c12 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 20:00:05 pause-705639 crio[2401]: time="2024-01-03 20:00:05.162843541Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704312005162827915,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=205d5039-5707-416d-a535-198470419c12 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 20:00:05 pause-705639 crio[2401]: time="2024-01-03 20:00:05.163398219Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=8ce8573e-ad78-4b80-b8f7-70ec827996c4 name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:00:05 pause-705639 crio[2401]: time="2024-01-03 20:00:05.163475993Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=8ce8573e-ad78-4b80-b8f7-70ec827996c4 name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:00:05 pause-705639 crio[2401]: time="2024-01-03 20:00:05.163755196Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:455434fe7c01851c7a44e8f2856bcd27ba50fbc171872b32ca39d3dd6e44aee1,PodSandboxId:7d9ea3e3e063f3e40b9aeb0c3a14b2152b2fc3a7460d10ea9a2b868ff48de02f,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704311985082791945,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-fkkp5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9226d155-0c50-444f-9899-7c425b5ea32e,},Annotations:map[string]string{io.kubernetes.container.hash: 877b8335,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4e78cd600ca588f554ee07b43d3401d215e526075fb0fd6d8c89562ade74c7d,PodSandboxId:4cd50b769d4b7221885b638550de6b09b111bfdaf606ea17711526dfcfb13c91,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704311985109015257,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lwbnd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 2dbfaa3d-dc71-48ee-9746-357990a3b6b5,},Annotations:map[string]string{io.kubernetes.container.hash: cb6296c2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a33080889b8bb498826a4410f447222ced0cec3d0fd7fbba815e1281c6f0425b,PodSandboxId:b3fd2466af4f494744fe3b03eb40a564b890c08a7c6466e28c4e23b75698752d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704311979480811724,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-705639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85e9694
dda2f6e2035142215b093a9d6,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b54a6b8473f946ad2d47d0c5bccc78099bb3853bce060f3cab8e0cc8ed8a2f9d,PodSandboxId:586c5324ad955f559aa33b7add0038e78fdb8e93a045afc881e9de85cab0d7bd,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704311979448310234,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-705639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bf83f638fa3405cd004c68f3ff4d378,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 6c8cf32a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77bb535231c2293addc26368e62c04a3fba1b78e6f33736b626290437a5d1aff,PodSandboxId:1da6fb4cd1f940f7cec2bd9c50060cb33f80039d55089e4055b1a92bcf9485f6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704311979505632130,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-705639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd9c59c6d56232bfd7011bd5817fde97,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6645f9e298bdada5f9bea0c6ce1e25771590562d4af0bd309743e755e6c70c09,PodSandboxId:0ff251fadd26f8b93533ea1c171fc68d04afcb59e38800748c7e032b339c6346,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704311979423584507,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-705639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0173835980ecc42799c1960492dad7b,},Annotations:map[string]s
tring{io.kubernetes.container.hash: a1f241a6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:903578371b774d38b9816330b3a9c348bd260670a0b2eaf6e4a0d9f7257a25d3,PodSandboxId:4cd50b769d4b7221885b638550de6b09b111bfdaf606ea17711526dfcfb13c91,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_EXITED,CreatedAt:1704311971519238479,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lwbnd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dbfaa3d-dc71-48ee-9746-357990a3b6b5,},Annotations:map[string]string{io.kubernetes.container.hash:
cb6296c2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:101176aa35ad26cd0a4f111845d7a4e730e0cad65ea629b64409e2db2d12d0be,PodSandboxId:7d9ea3e3e063f3e40b9aeb0c3a14b2152b2fc3a7460d10ea9a2b868ff48de02f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1704311962081684534,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-fkkp5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9226d155-0c50-444f-9899-7c425b5ea32e,},Annotations:map[string]string{io.kubernetes.container.hash: 877b8335,io.kubernetes.contai
ner.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57bfa544d32df074fee6e6b9271e7ed98ca6c1cc6bbac788ab3f358fc790f198,PodSandboxId:586c5324ad955f559aa33b7add0038e78fdb8e93a045afc881e9de85cab0d7bd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1704311961469083325,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-705639,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 1bf83f638fa3405cd004c68f3ff4d378,},Annotations:map[string]string{io.kubernetes.container.hash: 6c8cf32a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c2237d818c602824bb96ebeb79ae1a17cf8eec0bbf8bb9c1df1d6c42898c8d0,PodSandboxId:b3fd2466af4f494744fe3b03eb40a564b890c08a7c6466e28c4e23b75698752d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_EXITED,CreatedAt:1704311961052853887,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-705639,io.kubernetes.pod.namespace: kube-sy
stem,io.kubernetes.pod.uid: 85e9694dda2f6e2035142215b093a9d6,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1c7f60e2d0e1a72a5badd3317f1eb17c532ba8d564f595f9dbceec121b24424,PodSandboxId:1da6fb4cd1f940f7cec2bd9c50060cb33f80039d55089e4055b1a92bcf9485f6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_EXITED,CreatedAt:1704311961121112356,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-705639,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: cd9c59c6d56232bfd7011bd5817fde97,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57dcb0ae1175465ae550344541901cc54c4f621a16c787ffac5585ed8c4d8096,PodSandboxId:a0ddc31981afb1b9561f831d23f7dd5263c250d6c68325ab76bd3f2a89e484e8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1704311957809683313,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-705639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0173835
980ecc42799c1960492dad7b,},Annotations:map[string]string{io.kubernetes.container.hash: a1f241a6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=8ce8573e-ad78-4b80-b8f7-70ec827996c4 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c4e78cd600ca5       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   20 seconds ago      Running             kube-proxy                2                   4cd50b769d4b7       kube-proxy-lwbnd
	455434fe7c018       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   20 seconds ago      Running             coredns                   2                   7d9ea3e3e063f       coredns-5dd5756b68-fkkp5
	77bb535231c22       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   25 seconds ago      Running             kube-controller-manager   2                   1da6fb4cd1f94       kube-controller-manager-pause-705639
	a33080889b8bb       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   25 seconds ago      Running             kube-scheduler            2                   b3fd2466af4f4       kube-scheduler-pause-705639
	b54a6b8473f94       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   25 seconds ago      Running             etcd                      2                   586c5324ad955       etcd-pause-705639
	6645f9e298bda       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   25 seconds ago      Running             kube-apiserver            2                   0ff251fadd26f       kube-apiserver-pause-705639
	903578371b774       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   33 seconds ago      Exited              kube-proxy                1                   4cd50b769d4b7       kube-proxy-lwbnd
	101176aa35ad2       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   43 seconds ago      Exited              coredns                   1                   7d9ea3e3e063f       coredns-5dd5756b68-fkkp5
	57bfa544d32df       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   43 seconds ago      Exited              etcd                      1                   586c5324ad955       etcd-pause-705639
	a1c7f60e2d0e1       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   44 seconds ago      Exited              kube-controller-manager   1                   1da6fb4cd1f94       kube-controller-manager-pause-705639
	6c2237d818c60       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   44 seconds ago      Exited              kube-scheduler            1                   b3fd2466af4f4       kube-scheduler-pause-705639
	57dcb0ae11754       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   47 seconds ago      Exited              kube-apiserver            1                   a0ddc31981afb       kube-apiserver-pause-705639
	
	
	==> coredns [101176aa35ad26cd0a4f111845d7a4e730e0cad65ea629b64409e2db2d12d0be] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 347fb4f25cc546215231b2e9ef34a7838489408c50ad1d77e38b06de967dd388dc540a0db2692259640c7998323f3763426b7a7e73fad2aa89cebddf27cf7c94
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:48941 - 42552 "HINFO IN 6871104697284162923.1326580057340223885. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010157982s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> coredns [455434fe7c01851c7a44e8f2856bcd27ba50fbc171872b32ca39d3dd6e44aee1] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 347fb4f25cc546215231b2e9ef34a7838489408c50ad1d77e38b06de967dd388dc540a0db2692259640c7998323f3763426b7a7e73fad2aa89cebddf27cf7c94
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:46897 - 36895 "HINFO IN 2621362529278284310.5964583020983758366. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011387417s
	
	
	==> describe nodes <==
	Name:               pause-705639
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-705639
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b6a81cbc05f28310ff11df4170e79e2b8bf477a
	                    minikube.k8s.io/name=pause-705639
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_03T19_58_08_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 03 Jan 2024 19:58:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-705639
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 03 Jan 2024 20:00:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 03 Jan 2024 19:59:44 +0000   Wed, 03 Jan 2024 19:58:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 03 Jan 2024 19:59:44 +0000   Wed, 03 Jan 2024 19:58:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 03 Jan 2024 19:59:44 +0000   Wed, 03 Jan 2024 19:58:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 03 Jan 2024 19:59:44 +0000   Wed, 03 Jan 2024 19:58:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.83.234
	  Hostname:    pause-705639
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	System Info:
	  Machine ID:                 cea6d4794b7e42e7a563c3182cb164ac
	  System UUID:                cea6d479-4b7e-42e7-a563-c3182cb164ac
	  Boot ID:                    5cd4f7f6-adce-4f0c-a3c4-478567b0ed0a
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-fkkp5                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     104s
	  kube-system                 etcd-pause-705639                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         117s
	  kube-system                 kube-apiserver-pause-705639             250m (12%)    0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-controller-manager-pause-705639    200m (10%)    0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-proxy-lwbnd                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-scheduler-pause-705639             100m (5%)     0 (0%)      0 (0%)           0 (0%)         118s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 102s                 kube-proxy       
	  Normal  Starting                 19s                  kube-proxy       
	  Normal  NodeHasSufficientPID     2m7s (x7 over 2m7s)  kubelet          Node pause-705639 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m7s (x8 over 2m7s)  kubelet          Node pause-705639 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m7s (x8 over 2m7s)  kubelet          Node pause-705639 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  2m7s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                117s                 kubelet          Node pause-705639 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  117s                 kubelet          Node pause-705639 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    117s                 kubelet          Node pause-705639 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     117s                 kubelet          Node pause-705639 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  117s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 117s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           105s                 node-controller  Node pause-705639 event: Registered Node pause-705639 in Controller
	  Normal  Starting                 27s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  26s (x8 over 27s)    kubelet          Node pause-705639 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    26s (x8 over 27s)    kubelet          Node pause-705639 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     26s (x7 over 27s)    kubelet          Node pause-705639 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9s                   node-controller  Node pause-705639 event: Registered Node pause-705639 in Controller
	
	
	==> dmesg <==
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.062216] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.542663] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.100726] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.172876] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +6.415192] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.246403] systemd-fstab-generator[636]: Ignoring "noauto" for root device
	[  +0.139758] systemd-fstab-generator[647]: Ignoring "noauto" for root device
	[  +0.169836] systemd-fstab-generator[660]: Ignoring "noauto" for root device
	[  +0.136663] systemd-fstab-generator[671]: Ignoring "noauto" for root device
	[  +0.254092] systemd-fstab-generator[695]: Ignoring "noauto" for root device
	[ +10.849378] systemd-fstab-generator[921]: Ignoring "noauto" for root device
	[Jan 3 19:58] systemd-fstab-generator[1254]: Ignoring "noauto" for root device
	[Jan 3 19:59] systemd-fstab-generator[2066]: Ignoring "noauto" for root device
	[  +0.226735] systemd-fstab-generator[2083]: Ignoring "noauto" for root device
	[  +0.031316] kauditd_printk_skb: 23 callbacks suppressed
	[  +0.622802] systemd-fstab-generator[2238]: Ignoring "noauto" for root device
	[  +0.251550] systemd-fstab-generator[2251]: Ignoring "noauto" for root device
	[  +0.557797] systemd-fstab-generator[2307]: Ignoring "noauto" for root device
	[ +20.391257] systemd-fstab-generator[3242]: Ignoring "noauto" for root device
	[  +7.244662] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [57bfa544d32df074fee6e6b9271e7ed98ca6c1cc6bbac788ab3f358fc790f198] <==
	{"level":"info","ts":"2024-01-03T19:59:23.480301Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.83.234:2380"}
	{"level":"info","ts":"2024-01-03T19:59:24.693698Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cd43c6a93c7b8f91 is starting a new election at term 2"}
	{"level":"info","ts":"2024-01-03T19:59:24.693775Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cd43c6a93c7b8f91 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-01-03T19:59:24.693817Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cd43c6a93c7b8f91 received MsgPreVoteResp from cd43c6a93c7b8f91 at term 2"}
	{"level":"info","ts":"2024-01-03T19:59:24.693835Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cd43c6a93c7b8f91 became candidate at term 3"}
	{"level":"info","ts":"2024-01-03T19:59:24.693847Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cd43c6a93c7b8f91 received MsgVoteResp from cd43c6a93c7b8f91 at term 3"}
	{"level":"info","ts":"2024-01-03T19:59:24.693859Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cd43c6a93c7b8f91 became leader at term 3"}
	{"level":"info","ts":"2024-01-03T19:59:24.693869Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: cd43c6a93c7b8f91 elected leader cd43c6a93c7b8f91 at term 3"}
	{"level":"info","ts":"2024-01-03T19:59:24.700495Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-03T19:59:24.702146Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-03T19:59:24.702572Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-03T19:59:24.703847Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.83.234:2379"}
	{"level":"info","ts":"2024-01-03T19:59:24.70044Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"cd43c6a93c7b8f91","local-member-attributes":"{Name:pause-705639 ClientURLs:[https://192.168.83.234:2379]}","request-path":"/0/members/cd43c6a93c7b8f91/attributes","cluster-id":"29ca0c39bca1c057","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-03T19:59:24.707562Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-03T19:59:24.707632Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-03T19:59:36.924662Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-01-03T19:59:36.924818Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"pause-705639","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.83.234:2380"],"advertise-client-urls":["https://192.168.83.234:2379"]}
	{"level":"warn","ts":"2024-01-03T19:59:36.924987Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-01-03T19:59:36.92509Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-01-03T19:59:36.927154Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.83.234:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-01-03T19:59:36.927199Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.83.234:2379: use of closed network connection"}
	{"level":"info","ts":"2024-01-03T19:59:36.928611Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"cd43c6a93c7b8f91","current-leader-member-id":"cd43c6a93c7b8f91"}
	{"level":"info","ts":"2024-01-03T19:59:36.9328Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.83.234:2380"}
	{"level":"info","ts":"2024-01-03T19:59:36.932979Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.83.234:2380"}
	{"level":"info","ts":"2024-01-03T19:59:36.933036Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"pause-705639","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.83.234:2380"],"advertise-client-urls":["https://192.168.83.234:2379"]}
	
	
	==> etcd [b54a6b8473f946ad2d47d0c5bccc78099bb3853bce060f3cab8e0cc8ed8a2f9d] <==
	{"level":"info","ts":"2024-01-03T19:59:41.103227Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-01-03T19:59:41.103347Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-03T19:59:41.106951Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-03T19:59:41.106964Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-03T19:59:41.105697Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.83.234:2380"}
	{"level":"info","ts":"2024-01-03T19:59:41.108023Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.83.234:2380"}
	{"level":"info","ts":"2024-01-03T19:59:41.106187Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cd43c6a93c7b8f91 switched to configuration voters=(14790884031381344145)"}
	{"level":"info","ts":"2024-01-03T19:59:41.108158Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"29ca0c39bca1c057","local-member-id":"cd43c6a93c7b8f91","added-peer-id":"cd43c6a93c7b8f91","added-peer-peer-urls":["https://192.168.83.234:2380"]}
	{"level":"info","ts":"2024-01-03T19:59:41.108388Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"29ca0c39bca1c057","local-member-id":"cd43c6a93c7b8f91","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-03T19:59:41.108476Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-03T19:59:42.041269Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cd43c6a93c7b8f91 is starting a new election at term 3"}
	{"level":"info","ts":"2024-01-03T19:59:42.041332Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cd43c6a93c7b8f91 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-01-03T19:59:42.04136Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cd43c6a93c7b8f91 received MsgPreVoteResp from cd43c6a93c7b8f91 at term 3"}
	{"level":"info","ts":"2024-01-03T19:59:42.041388Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cd43c6a93c7b8f91 became candidate at term 4"}
	{"level":"info","ts":"2024-01-03T19:59:42.041394Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cd43c6a93c7b8f91 received MsgVoteResp from cd43c6a93c7b8f91 at term 4"}
	{"level":"info","ts":"2024-01-03T19:59:42.041403Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cd43c6a93c7b8f91 became leader at term 4"}
	{"level":"info","ts":"2024-01-03T19:59:42.041409Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: cd43c6a93c7b8f91 elected leader cd43c6a93c7b8f91 at term 4"}
	{"level":"info","ts":"2024-01-03T19:59:42.04304Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"cd43c6a93c7b8f91","local-member-attributes":"{Name:pause-705639 ClientURLs:[https://192.168.83.234:2379]}","request-path":"/0/members/cd43c6a93c7b8f91/attributes","cluster-id":"29ca0c39bca1c057","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-03T19:59:42.043226Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-03T19:59:42.044344Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.83.234:2379"}
	{"level":"info","ts":"2024-01-03T19:59:42.044918Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-03T19:59:42.045612Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-03T19:59:42.04565Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-03T19:59:42.045838Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-03T19:59:44.138329Z","caller":"traceutil/trace.go:171","msg":"trace[844554984] transaction","detail":"{read_only:false; response_revision:420; number_of_response:1; }","duration":"109.850362ms","start":"2024-01-03T19:59:44.028459Z","end":"2024-01-03T19:59:44.13831Z","steps":["trace[844554984] 'process raft request'  (duration: 82.134231ms)","trace[844554984] 'compare'  (duration: 27.425284ms)"],"step_count":2}
	
	
	==> kernel <==
	 20:00:05 up 2 min,  0 users,  load average: 1.73, 0.69, 0.25
	Linux pause-705639 5.10.57 #1 SMP Sat Dec 16 11:03:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [57dcb0ae1175465ae550344541901cc54c4f621a16c787ffac5585ed8c4d8096] <==
	
	
	==> kube-apiserver [6645f9e298bdada5f9bea0c6ce1e25771590562d4af0bd309743e755e6c70c09] <==
	I0103 19:59:43.829621       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0103 19:59:43.877876       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0103 19:59:43.877940       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0103 19:59:43.976086       1 shared_informer.go:318] Caches are synced for configmaps
	I0103 19:59:43.976150       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0103 19:59:43.976347       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0103 19:59:43.979251       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0103 19:59:43.984856       1 aggregator.go:166] initial CRD sync complete...
	I0103 19:59:43.984934       1 autoregister_controller.go:141] Starting autoregister controller
	I0103 19:59:43.984966       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0103 19:59:43.985011       1 cache.go:39] Caches are synced for autoregister controller
	I0103 19:59:43.990724       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0103 19:59:44.004450       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0103 19:59:44.035270       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0103 19:59:44.035329       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	E0103 19:59:44.036296       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0103 19:59:44.041350       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0103 19:59:44.873934       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0103 19:59:46.074152       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0103 19:59:46.092891       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0103 19:59:46.170831       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0103 19:59:46.209058       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0103 19:59:46.228931       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0103 19:59:56.870234       1 controller.go:624] quota admission added evaluator for: endpoints
	I0103 19:59:56.876735       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [77bb535231c2293addc26368e62c04a3fba1b78e6f33736b626290437a5d1aff] <==
	I0103 19:59:56.696387       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I0103 19:59:56.696606       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-705639"
	I0103 19:59:56.696711       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0103 19:59:56.696779       1 taint_manager.go:205] "Starting NoExecuteTaintManager"
	I0103 19:59:56.697389       1 taint_manager.go:210] "Sending events to api server"
	I0103 19:59:56.697915       1 event.go:307] "Event occurred" object="pause-705639" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-705639 event: Registered Node pause-705639 in Controller"
	I0103 19:59:56.709888       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0103 19:59:56.710014       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0103 19:59:56.712622       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0103 19:59:56.713782       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0103 19:59:56.717909       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0103 19:59:56.726387       1 shared_informer.go:318] Caches are synced for HPA
	I0103 19:59:56.727867       1 shared_informer.go:318] Caches are synced for daemon sets
	I0103 19:59:56.730666       1 shared_informer.go:318] Caches are synced for job
	I0103 19:59:56.735931       1 shared_informer.go:318] Caches are synced for TTL
	I0103 19:59:56.741421       1 shared_informer.go:318] Caches are synced for attach detach
	I0103 19:59:56.748735       1 shared_informer.go:318] Caches are synced for PV protection
	I0103 19:59:56.808161       1 shared_informer.go:318] Caches are synced for resource quota
	I0103 19:59:56.809402       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0103 19:59:56.827843       1 shared_informer.go:318] Caches are synced for resource quota
	I0103 19:59:56.853939       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0103 19:59:56.857676       1 shared_informer.go:318] Caches are synced for endpoint
	I0103 19:59:57.268463       1 shared_informer.go:318] Caches are synced for garbage collector
	I0103 19:59:57.273859       1 shared_informer.go:318] Caches are synced for garbage collector
	I0103 19:59:57.273968       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	
	==> kube-controller-manager [a1c7f60e2d0e1a72a5badd3317f1eb17c532ba8d564f595f9dbceec121b24424] <==
	I0103 19:59:23.035149       1 serving.go:348] Generated self-signed cert in-memory
	I0103 19:59:24.057853       1 controllermanager.go:189] "Starting" version="v1.28.4"
	I0103 19:59:24.057909       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0103 19:59:24.060073       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0103 19:59:24.061094       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0103 19:59:24.061690       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0103 19:59:24.061908       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0103 19:59:34.064472       1 controllermanager.go:235] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.83.234:8443/healthz\": dial tcp 192.168.83.234:8443: connect: connection refused"
	
	
	==> kube-proxy [903578371b774d38b9816330b3a9c348bd260670a0b2eaf6e4a0d9f7257a25d3] <==
	
	
	==> kube-proxy [c4e78cd600ca588f554ee07b43d3401d215e526075fb0fd6d8c89562ade74c7d] <==
	I0103 19:59:45.465668       1 server_others.go:69] "Using iptables proxy"
	I0103 19:59:45.490440       1 node.go:141] Successfully retrieved node IP: 192.168.83.234
	I0103 19:59:45.553154       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0103 19:59:45.553257       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0103 19:59:45.561768       1 server_others.go:152] "Using iptables Proxier"
	I0103 19:59:45.561907       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0103 19:59:45.562326       1 server.go:846] "Version info" version="v1.28.4"
	I0103 19:59:45.562366       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0103 19:59:45.563895       1 config.go:188] "Starting service config controller"
	I0103 19:59:45.563975       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0103 19:59:45.564021       1 config.go:97] "Starting endpoint slice config controller"
	I0103 19:59:45.564054       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0103 19:59:45.564800       1 config.go:315] "Starting node config controller"
	I0103 19:59:45.564892       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0103 19:59:45.664601       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0103 19:59:45.664650       1 shared_informer.go:318] Caches are synced for service config
	I0103 19:59:45.665098       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [6c2237d818c602824bb96ebeb79ae1a17cf8eec0bbf8bb9c1df1d6c42898c8d0] <==
	E0103 19:59:32.270808       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.83.234:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.83.234:8443: connect: connection refused
	W0103 19:59:32.326665       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.83.234:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.83.234:8443: connect: connection refused
	E0103 19:59:32.326728       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.83.234:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.83.234:8443: connect: connection refused
	W0103 19:59:32.353759       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: Get "https://192.168.83.234:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.83.234:8443: connect: connection refused
	E0103 19:59:32.353826       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.83.234:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.83.234:8443: connect: connection refused
	W0103 19:59:32.436254       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: Get "https://192.168.83.234:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.83.234:8443: connect: connection refused
	E0103 19:59:32.436326       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.83.234:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.83.234:8443: connect: connection refused
	W0103 19:59:32.508094       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: Get "https://192.168.83.234:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.83.234:8443: connect: connection refused
	E0103 19:59:32.508166       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.83.234:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.83.234:8443: connect: connection refused
	W0103 19:59:32.649000       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://192.168.83.234:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.83.234:8443: connect: connection refused
	E0103 19:59:32.649070       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.83.234:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.83.234:8443: connect: connection refused
	W0103 19:59:32.738224       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: Get "https://192.168.83.234:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.83.234:8443: connect: connection refused
	E0103 19:59:32.738255       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.83.234:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.83.234:8443: connect: connection refused
	W0103 19:59:32.886637       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://192.168.83.234:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.83.234:8443: connect: connection refused
	E0103 19:59:32.886779       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.83.234:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.83.234:8443: connect: connection refused
	W0103 19:59:33.062326       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: Get "https://192.168.83.234:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.83.234:8443: connect: connection refused
	E0103 19:59:33.062577       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.83.234:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.83.234:8443: connect: connection refused
	W0103 19:59:33.792745       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: Get "https://192.168.83.234:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%2Cstatus.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.83.234:8443: connect: connection refused
	E0103 19:59:33.792864       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.83.234:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%2Cstatus.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.83.234:8443: connect: connection refused
	W0103 19:59:34.162398       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://192.168.83.234:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.83.234:8443: connect: connection refused
	E0103 19:59:34.162565       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.83.234:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.83.234:8443: connect: connection refused
	W0103 19:59:34.385411       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get "https://192.168.83.234:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.83.234:8443: connect: connection refused
	E0103 19:59:34.385651       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.83.234:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.83.234:8443: connect: connection refused
	E0103 19:59:37.081006       1 server.go:214] "waiting for handlers to sync" err="context canceled"
	E0103 19:59:37.081714       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [a33080889b8bb498826a4410f447222ced0cec3d0fd7fbba815e1281c6f0425b] <==
	I0103 19:59:42.051057       1 serving.go:348] Generated self-signed cert in-memory
	W0103 19:59:43.908414       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0103 19:59:43.908497       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0103 19:59:43.908549       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0103 19:59:43.908558       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0103 19:59:43.991997       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0103 19:59:43.992086       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0103 19:59:43.993620       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0103 19:59:43.993702       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0103 19:59:43.994591       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0103 19:59:43.996022       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0103 19:59:44.094130       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Wed 2024-01-03 19:57:30 UTC, ends at Wed 2024-01-03 20:00:05 UTC. --
	Jan 03 19:59:39 pause-705639 kubelet[3248]: E0103 19:59:39.475598    3248 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.83.234:8443: connect: connection refused" node="pause-705639"
	Jan 03 19:59:40 pause-705639 kubelet[3248]: E0103 19:59:40.159687    3248 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-705639?timeout=10s\": dial tcp 192.168.83.234:8443: connect: connection refused" interval="1.6s"
	Jan 03 19:59:40 pause-705639 kubelet[3248]: W0103 19:59:40.166340    3248 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-705639&limit=500&resourceVersion=0": dial tcp 192.168.83.234:8443: connect: connection refused
	Jan 03 19:59:40 pause-705639 kubelet[3248]: E0103 19:59:40.166431    3248 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-705639&limit=500&resourceVersion=0": dial tcp 192.168.83.234:8443: connect: connection refused
	Jan 03 19:59:40 pause-705639 kubelet[3248]: W0103 19:59:40.171977    3248 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.83.234:8443: connect: connection refused
	Jan 03 19:59:40 pause-705639 kubelet[3248]: E0103 19:59:40.172068    3248 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.83.234:8443: connect: connection refused
	Jan 03 19:59:40 pause-705639 kubelet[3248]: W0103 19:59:40.260986    3248 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.83.234:8443: connect: connection refused
	Jan 03 19:59:40 pause-705639 kubelet[3248]: E0103 19:59:40.261085    3248 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.83.234:8443: connect: connection refused
	Jan 03 19:59:40 pause-705639 kubelet[3248]: I0103 19:59:40.277279    3248 kubelet_node_status.go:70] "Attempting to register node" node="pause-705639"
	Jan 03 19:59:40 pause-705639 kubelet[3248]: E0103 19:59:40.277780    3248 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.83.234:8443: connect: connection refused" node="pause-705639"
	Jan 03 19:59:40 pause-705639 kubelet[3248]: W0103 19:59:40.305801    3248 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.83.234:8443: connect: connection refused
	Jan 03 19:59:40 pause-705639 kubelet[3248]: E0103 19:59:40.305878    3248 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.83.234:8443: connect: connection refused
	Jan 03 19:59:41 pause-705639 kubelet[3248]: I0103 19:59:41.880214    3248 kubelet_node_status.go:70] "Attempting to register node" node="pause-705639"
	Jan 03 19:59:44 pause-705639 kubelet[3248]: I0103 19:59:44.025111    3248 kubelet_node_status.go:108] "Node was previously registered" node="pause-705639"
	Jan 03 19:59:44 pause-705639 kubelet[3248]: I0103 19:59:44.025235    3248 kubelet_node_status.go:73] "Successfully registered node" node="pause-705639"
	Jan 03 19:59:44 pause-705639 kubelet[3248]: I0103 19:59:44.029284    3248 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jan 03 19:59:44 pause-705639 kubelet[3248]: I0103 19:59:44.031026    3248 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jan 03 19:59:44 pause-705639 kubelet[3248]: I0103 19:59:44.740634    3248 apiserver.go:52] "Watching apiserver"
	Jan 03 19:59:44 pause-705639 kubelet[3248]: I0103 19:59:44.747674    3248 topology_manager.go:215] "Topology Admit Handler" podUID="2dbfaa3d-dc71-48ee-9746-357990a3b6b5" podNamespace="kube-system" podName="kube-proxy-lwbnd"
	Jan 03 19:59:44 pause-705639 kubelet[3248]: I0103 19:59:44.747971    3248 topology_manager.go:215] "Topology Admit Handler" podUID="9226d155-0c50-444f-9899-7c425b5ea32e" podNamespace="kube-system" podName="coredns-5dd5756b68-fkkp5"
	Jan 03 19:59:44 pause-705639 kubelet[3248]: I0103 19:59:44.753991    3248 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Jan 03 19:59:44 pause-705639 kubelet[3248]: I0103 19:59:44.796042    3248 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2dbfaa3d-dc71-48ee-9746-357990a3b6b5-xtables-lock\") pod \"kube-proxy-lwbnd\" (UID: \"2dbfaa3d-dc71-48ee-9746-357990a3b6b5\") " pod="kube-system/kube-proxy-lwbnd"
	Jan 03 19:59:44 pause-705639 kubelet[3248]: I0103 19:59:44.796118    3248 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2dbfaa3d-dc71-48ee-9746-357990a3b6b5-lib-modules\") pod \"kube-proxy-lwbnd\" (UID: \"2dbfaa3d-dc71-48ee-9746-357990a3b6b5\") " pod="kube-system/kube-proxy-lwbnd"
	Jan 03 19:59:45 pause-705639 kubelet[3248]: I0103 19:59:45.049428    3248 scope.go:117] "RemoveContainer" containerID="903578371b774d38b9816330b3a9c348bd260670a0b2eaf6e4a0d9f7257a25d3"
	Jan 03 19:59:45 pause-705639 kubelet[3248]: I0103 19:59:45.050196    3248 scope.go:117] "RemoveContainer" containerID="101176aa35ad26cd0a4f111845d7a4e730e0cad65ea629b64409e2db2d12d0be"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-705639 -n pause-705639
helpers_test.go:261: (dbg) Run:  kubectl --context pause-705639 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (99.99s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (139.65s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-927922 --alsologtostderr -v=3
E0103 20:05:29.904457   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/kindnet-719541/client.crt: no such file or directory
E0103 20:05:31.705014   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/functional-166268/client.crt: no such file or directory
E0103 20:05:42.934624   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/auto-719541/client.crt: no such file or directory
E0103 20:05:48.653639   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/functional-166268/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p old-k8s-version-927922 --alsologtostderr -v=3: exit status 82 (2m1.037268859s)

                                                
                                                
-- stdout --
	* Stopping node "old-k8s-version-927922"  ...
	* Stopping node "old-k8s-version-927922"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0103 20:05:21.031353   60490 out.go:296] Setting OutFile to fd 1 ...
	I0103 20:05:21.031588   60490 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 20:05:21.031623   60490 out.go:309] Setting ErrFile to fd 2...
	I0103 20:05:21.031638   60490 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 20:05:21.031810   60490 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17885-9609/.minikube/bin
	I0103 20:05:21.032080   60490 out.go:303] Setting JSON to false
	I0103 20:05:21.032191   60490 mustload.go:65] Loading cluster: old-k8s-version-927922
	I0103 20:05:21.032527   60490 config.go:182] Loaded profile config "old-k8s-version-927922": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0103 20:05:21.032609   60490 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/old-k8s-version-927922/config.json ...
	I0103 20:05:21.032778   60490 mustload.go:65] Loading cluster: old-k8s-version-927922
	I0103 20:05:21.032913   60490 config.go:182] Loaded profile config "old-k8s-version-927922": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0103 20:05:21.032951   60490 stop.go:39] StopHost: old-k8s-version-927922
	I0103 20:05:21.033353   60490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:05:21.033452   60490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:05:21.053343   60490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40427
	I0103 20:05:21.053976   60490 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:05:21.054761   60490 main.go:141] libmachine: Using API Version  1
	I0103 20:05:21.054786   60490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:05:21.055118   60490 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:05:21.057881   60490 out.go:177] * Stopping node "old-k8s-version-927922"  ...
	I0103 20:05:21.059252   60490 main.go:141] libmachine: Stopping "old-k8s-version-927922"...
	I0103 20:05:21.059276   60490 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetState
	I0103 20:05:21.061043   60490 main.go:141] libmachine: (old-k8s-version-927922) Calling .Stop
	I0103 20:05:21.064870   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 0/60
	I0103 20:05:22.066670   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 1/60
	I0103 20:05:23.068029   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 2/60
	I0103 20:05:24.070626   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 3/60
	I0103 20:05:25.072267   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 4/60
	I0103 20:05:26.074906   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 5/60
	I0103 20:05:27.077262   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 6/60
	I0103 20:05:28.078911   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 7/60
	I0103 20:05:29.081248   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 8/60
	I0103 20:05:30.083516   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 9/60
	I0103 20:05:31.085356   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 10/60
	I0103 20:05:32.087332   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 11/60
	I0103 20:05:33.089514   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 12/60
	I0103 20:05:34.091579   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 13/60
	I0103 20:05:35.093532   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 14/60
	I0103 20:05:36.095813   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 15/60
	I0103 20:05:37.097416   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 16/60
	I0103 20:05:38.099701   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 17/60
	I0103 20:05:39.101693   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 18/60
	I0103 20:05:40.103530   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 19/60
	I0103 20:05:41.106040   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 20/60
	I0103 20:05:42.108272   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 21/60
	I0103 20:05:43.110616   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 22/60
	I0103 20:05:44.112163   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 23/60
	I0103 20:05:45.114722   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 24/60
	I0103 20:05:46.116824   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 25/60
	I0103 20:05:47.118184   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 26/60
	I0103 20:05:48.120131   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 27/60
	I0103 20:05:49.121870   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 28/60
	I0103 20:05:50.123982   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 29/60
	I0103 20:05:51.125959   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 30/60
	I0103 20:05:52.127578   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 31/60
	I0103 20:05:53.129952   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 32/60
	I0103 20:05:54.131739   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 33/60
	I0103 20:05:55.134139   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 34/60
	I0103 20:05:56.136195   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 35/60
	I0103 20:05:57.138237   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 36/60
	I0103 20:05:58.139922   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 37/60
	I0103 20:05:59.141035   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 38/60
	I0103 20:06:00.142507   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 39/60
	I0103 20:06:01.144338   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 40/60
	I0103 20:06:02.145683   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 41/60
	I0103 20:06:03.147423   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 42/60
	I0103 20:06:04.149199   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 43/60
	I0103 20:06:05.150476   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 44/60
	I0103 20:06:06.152854   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 45/60
	I0103 20:06:07.154211   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 46/60
	I0103 20:06:08.155388   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 47/60
	I0103 20:06:09.157180   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 48/60
	I0103 20:06:10.159081   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 49/60
	I0103 20:06:11.161324   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 50/60
	I0103 20:06:12.163174   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 51/60
	I0103 20:06:13.165131   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 52/60
	I0103 20:06:14.166842   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 53/60
	I0103 20:06:15.168374   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 54/60
	I0103 20:06:16.170876   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 55/60
	I0103 20:06:17.172255   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 56/60
	I0103 20:06:18.173670   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 57/60
	I0103 20:06:19.175116   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 58/60
	I0103 20:06:20.176446   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 59/60
	I0103 20:06:21.177696   60490 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0103 20:06:21.177752   60490 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0103 20:06:21.177773   60490 retry.go:31] will retry after 694.538053ms: Temporary Error: stop: unable to stop vm, current state "Running"
	I0103 20:06:21.872569   60490 stop.go:39] StopHost: old-k8s-version-927922
	I0103 20:06:21.873029   60490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:06:21.873082   60490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:06:21.891009   60490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34099
	I0103 20:06:21.891599   60490 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:06:21.892126   60490 main.go:141] libmachine: Using API Version  1
	I0103 20:06:21.892151   60490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:06:21.892469   60490 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:06:21.894173   60490 out.go:177] * Stopping node "old-k8s-version-927922"  ...
	I0103 20:06:21.895511   60490 main.go:141] libmachine: Stopping "old-k8s-version-927922"...
	I0103 20:06:21.895532   60490 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetState
	I0103 20:06:21.897329   60490 main.go:141] libmachine: (old-k8s-version-927922) Calling .Stop
	I0103 20:06:21.901217   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 0/60
	I0103 20:06:22.902633   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 1/60
	I0103 20:06:23.903894   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 2/60
	I0103 20:06:24.905146   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 3/60
	I0103 20:06:25.906586   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 4/60
	I0103 20:06:26.908052   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 5/60
	I0103 20:06:27.909405   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 6/60
	I0103 20:06:28.910771   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 7/60
	I0103 20:06:29.913180   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 8/60
	I0103 20:06:30.914670   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 9/60
	I0103 20:06:31.916007   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 10/60
	I0103 20:06:32.917603   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 11/60
	I0103 20:06:33.919042   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 12/60
	I0103 20:06:34.920314   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 13/60
	I0103 20:06:35.921728   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 14/60
	I0103 20:06:36.923178   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 15/60
	I0103 20:06:37.924534   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 16/60
	I0103 20:06:38.926156   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 17/60
	I0103 20:06:39.927711   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 18/60
	I0103 20:06:40.929158   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 19/60
	I0103 20:06:41.931082   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 20/60
	I0103 20:06:42.932441   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 21/60
	I0103 20:06:43.933816   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 22/60
	I0103 20:06:44.935259   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 23/60
	I0103 20:06:45.936709   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 24/60
	I0103 20:06:46.938139   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 25/60
	I0103 20:06:47.939773   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 26/60
	I0103 20:06:48.941042   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 27/60
	I0103 20:06:49.942491   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 28/60
	I0103 20:06:50.943908   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 29/60
	I0103 20:06:51.945658   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 30/60
	I0103 20:06:52.947134   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 31/60
	I0103 20:06:53.948898   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 32/60
	I0103 20:06:54.950197   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 33/60
	I0103 20:06:55.951509   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 34/60
	I0103 20:06:56.952883   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 35/60
	I0103 20:06:57.954087   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 36/60
	I0103 20:06:58.955575   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 37/60
	I0103 20:06:59.956938   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 38/60
	I0103 20:07:00.958603   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 39/60
	I0103 20:07:01.960839   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 40/60
	I0103 20:07:02.962511   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 41/60
	I0103 20:07:03.964039   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 42/60
	I0103 20:07:04.965608   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 43/60
	I0103 20:07:05.967150   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 44/60
	I0103 20:07:06.969647   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 45/60
	I0103 20:07:07.971128   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 46/60
	I0103 20:07:08.972789   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 47/60
	I0103 20:07:09.974496   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 48/60
	I0103 20:07:10.975901   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 49/60
	I0103 20:07:11.977644   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 50/60
	I0103 20:07:12.979323   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 51/60
	I0103 20:07:13.980809   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 52/60
	I0103 20:07:14.982635   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 53/60
	I0103 20:07:15.984139   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 54/60
	I0103 20:07:16.985649   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 55/60
	I0103 20:07:17.987307   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 56/60
	I0103 20:07:18.989042   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 57/60
	I0103 20:07:19.990442   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 58/60
	I0103 20:07:20.991874   60490 main.go:141] libmachine: (old-k8s-version-927922) Waiting for machine to stop 59/60
	I0103 20:07:21.993389   60490 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0103 20:07:21.993443   60490 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0103 20:07:21.995505   60490 out.go:177] 
	W0103 20:07:21.997284   60490 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0103 20:07:21.997307   60490 out.go:239] * 
	* 
	W0103 20:07:21.999679   60490 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0103 20:07:22.001459   60490 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p old-k8s-version-927922 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-927922 -n old-k8s-version-927922
E0103 20:07:23.518210   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/custom-flannel-719541/client.crt: no such file or directory
E0103 20:07:27.748286   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/enable-default-cni-719541/client.crt: no such file or directory
E0103 20:07:27.753613   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/enable-default-cni-719541/client.crt: no such file or directory
E0103 20:07:27.763970   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/enable-default-cni-719541/client.crt: no such file or directory
E0103 20:07:27.784281   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/enable-default-cni-719541/client.crt: no such file or directory
E0103 20:07:27.824641   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/enable-default-cni-719541/client.crt: no such file or directory
E0103 20:07:27.905051   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/enable-default-cni-719541/client.crt: no such file or directory
E0103 20:07:28.065491   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/enable-default-cni-719541/client.crt: no such file or directory
E0103 20:07:28.385621   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/enable-default-cni-719541/client.crt: no such file or directory
E0103 20:07:29.025879   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/enable-default-cni-719541/client.crt: no such file or directory
E0103 20:07:30.306812   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/enable-default-cni-719541/client.crt: no such file or directory
E0103 20:07:32.785815   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/kindnet-719541/client.crt: no such file or directory
E0103 20:07:32.867034   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/enable-default-cni-719541/client.crt: no such file or directory
E0103 20:07:37.987802   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/enable-default-cni-719541/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-927922 -n old-k8s-version-927922: exit status 3 (18.613538689s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0103 20:07:40.614841   61188 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.12:22: connect: no route to host
	E0103 20:07:40.614874   61188 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.12:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "old-k8s-version-927922" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Stop (139.65s)
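The stop failure above follows a fixed pattern in the log: the driver polls the VM state roughly once per second for 60 ticks, backs off briefly, retries the whole wait exactly once, and then gives up with GUEST_STOP_TIMEOUT (exit status 82). The Go sketch below only mirrors that shape for illustration; it is not minikube's actual stop code, and the isRunning callback and tick counts are stand-ins.

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForStop polls isRunning once per second for up to `ticks` iterations,
// printing progress in the same "i/ticks" style as the log above.
func waitForStop(isRunning func() bool, ticks int) error {
	for i := 0; i < ticks; i++ {
		if !isRunning() {
			return nil // machine reached a stopped state
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, ticks)
		time.Sleep(time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

// stopWithRetry makes one full wait, backs off briefly, then retries once;
// a second failure is what surfaces as the fatal GUEST_STOP_TIMEOUT above.
func stopWithRetry(isRunning func() bool, ticks int) error {
	if err := waitForStop(isRunning, ticks); err == nil {
		return nil
	}
	time.Sleep(700 * time.Millisecond) // short backoff before the single retry
	return waitForStop(isRunning, ticks)
}

func main() {
	// A VM that never reports "stopped" reproduces the failing path; the real
	// run used 60 ticks per attempt, 3 here just to keep the demo short.
	err := stopWithRetry(func() bool { return true }, 3)
	fmt.Println("stop err:", err)
}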

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (140s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-451331 --alsologtostderr -v=3
E0103 20:06:10.865378   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/kindnet-719541/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-451331 --alsologtostderr -v=3: exit status 82 (2m1.506869155s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-451331"  ...
	* Stopping node "embed-certs-451331"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0103 20:06:01.129206   60762 out.go:296] Setting OutFile to fd 1 ...
	I0103 20:06:01.129522   60762 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 20:06:01.129536   60762 out.go:309] Setting ErrFile to fd 2...
	I0103 20:06:01.129542   60762 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 20:06:01.129880   60762 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17885-9609/.minikube/bin
	I0103 20:06:01.130282   60762 out.go:303] Setting JSON to false
	I0103 20:06:01.130394   60762 mustload.go:65] Loading cluster: embed-certs-451331
	I0103 20:06:01.130971   60762 config.go:182] Loaded profile config "embed-certs-451331": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 20:06:01.131080   60762 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/embed-certs-451331/config.json ...
	I0103 20:06:01.131344   60762 mustload.go:65] Loading cluster: embed-certs-451331
	I0103 20:06:01.131518   60762 config.go:182] Loaded profile config "embed-certs-451331": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 20:06:01.131549   60762 stop.go:39] StopHost: embed-certs-451331
	I0103 20:06:01.132141   60762 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:06:01.132204   60762 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:06:01.149131   60762 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40281
	I0103 20:06:01.149594   60762 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:06:01.150199   60762 main.go:141] libmachine: Using API Version  1
	I0103 20:06:01.150229   60762 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:06:01.150632   60762 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:06:01.152921   60762 out.go:177] * Stopping node "embed-certs-451331"  ...
	I0103 20:06:01.154589   60762 main.go:141] libmachine: Stopping "embed-certs-451331"...
	I0103 20:06:01.154617   60762 main.go:141] libmachine: (embed-certs-451331) Calling .GetState
	I0103 20:06:01.156687   60762 main.go:141] libmachine: (embed-certs-451331) Calling .Stop
	I0103 20:06:01.160739   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 0/60
	I0103 20:06:02.162286   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 1/60
	I0103 20:06:03.163696   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 2/60
	I0103 20:06:04.165320   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 3/60
	I0103 20:06:05.166884   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 4/60
	I0103 20:06:06.168972   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 5/60
	I0103 20:06:07.171290   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 6/60
	I0103 20:06:08.172998   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 7/60
	I0103 20:06:09.175020   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 8/60
	I0103 20:06:10.176478   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 9/60
	I0103 20:06:11.178611   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 10/60
	I0103 20:06:12.180019   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 11/60
	I0103 20:06:13.181813   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 12/60
	I0103 20:06:14.184083   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 13/60
	I0103 20:06:15.185277   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 14/60
	I0103 20:06:16.187211   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 15/60
	I0103 20:06:17.189245   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 16/60
	I0103 20:06:18.190978   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 17/60
	I0103 20:06:19.193275   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 18/60
	I0103 20:06:20.194547   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 19/60
	I0103 20:06:21.196372   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 20/60
	I0103 20:06:22.197902   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 21/60
	I0103 20:06:23.199306   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 22/60
	I0103 20:06:24.200521   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 23/60
	I0103 20:06:25.202244   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 24/60
	I0103 20:06:26.204429   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 25/60
	I0103 20:06:27.205868   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 26/60
	I0103 20:06:28.207895   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 27/60
	I0103 20:06:29.209343   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 28/60
	I0103 20:06:30.211091   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 29/60
	I0103 20:06:31.213354   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 30/60
	I0103 20:06:32.215329   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 31/60
	I0103 20:06:33.217027   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 32/60
	I0103 20:06:34.219080   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 33/60
	I0103 20:06:35.220358   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 34/60
	I0103 20:06:36.222288   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 35/60
	I0103 20:06:37.223718   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 36/60
	I0103 20:06:38.224983   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 37/60
	I0103 20:06:39.226468   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 38/60
	I0103 20:06:40.227723   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 39/60
	I0103 20:06:41.230044   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 40/60
	I0103 20:06:42.231836   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 41/60
	I0103 20:06:43.233050   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 42/60
	I0103 20:06:44.234617   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 43/60
	I0103 20:06:45.236112   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 44/60
	I0103 20:06:46.238401   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 45/60
	I0103 20:06:47.240015   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 46/60
	I0103 20:06:48.241565   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 47/60
	I0103 20:06:49.243215   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 48/60
	I0103 20:06:50.244734   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 49/60
	I0103 20:06:51.247046   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 50/60
	I0103 20:06:52.248558   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 51/60
	I0103 20:06:53.250153   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 52/60
	I0103 20:06:54.251607   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 53/60
	I0103 20:06:55.253134   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 54/60
	I0103 20:06:56.255107   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 55/60
	I0103 20:06:57.257180   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 56/60
	I0103 20:06:58.258721   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 57/60
	I0103 20:06:59.260997   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 58/60
	I0103 20:07:00.262597   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 59/60
	I0103 20:07:01.263067   60762 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0103 20:07:01.263134   60762 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0103 20:07:01.263159   60762 retry.go:31] will retry after 1.181002074s: Temporary Error: stop: unable to stop vm, current state "Running"
	I0103 20:07:02.444467   60762 stop.go:39] StopHost: embed-certs-451331
	I0103 20:07:02.444986   60762 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:07:02.445041   60762 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:07:02.459844   60762 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39615
	I0103 20:07:02.460292   60762 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:07:02.460773   60762 main.go:141] libmachine: Using API Version  1
	I0103 20:07:02.460800   60762 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:07:02.461165   60762 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:07:02.464461   60762 out.go:177] * Stopping node "embed-certs-451331"  ...
	I0103 20:07:02.465851   60762 main.go:141] libmachine: Stopping "embed-certs-451331"...
	I0103 20:07:02.465876   60762 main.go:141] libmachine: (embed-certs-451331) Calling .GetState
	I0103 20:07:02.467711   60762 main.go:141] libmachine: (embed-certs-451331) Calling .Stop
	I0103 20:07:02.471511   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 0/60
	I0103 20:07:03.472801   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 1/60
	I0103 20:07:04.473970   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 2/60
	I0103 20:07:05.475448   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 3/60
	I0103 20:07:06.476806   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 4/60
	I0103 20:07:07.478447   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 5/60
	I0103 20:07:08.479653   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 6/60
	I0103 20:07:09.480864   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 7/60
	I0103 20:07:10.481978   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 8/60
	I0103 20:07:11.483118   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 9/60
	I0103 20:07:12.484840   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 10/60
	I0103 20:07:13.486105   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 11/60
	I0103 20:07:14.487158   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 12/60
	I0103 20:07:15.489071   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 13/60
	I0103 20:07:16.490104   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 14/60
	I0103 20:07:17.491702   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 15/60
	I0103 20:07:18.492802   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 16/60
	I0103 20:07:19.493953   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 17/60
	I0103 20:07:20.495322   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 18/60
	I0103 20:07:21.496415   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 19/60
	I0103 20:07:22.498248   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 20/60
	I0103 20:07:23.499718   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 21/60
	I0103 20:07:24.501717   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 22/60
	I0103 20:07:25.503305   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 23/60
	I0103 20:07:26.504927   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 24/60
	I0103 20:07:27.506768   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 25/60
	I0103 20:07:28.508734   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 26/60
	I0103 20:07:29.510861   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 27/60
	I0103 20:07:30.512653   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 28/60
	I0103 20:07:31.513992   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 29/60
	I0103 20:07:32.515928   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 30/60
	I0103 20:07:33.517659   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 31/60
	I0103 20:07:34.519575   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 32/60
	I0103 20:07:35.521122   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 33/60
	I0103 20:07:36.522623   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 34/60
	I0103 20:07:37.524588   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 35/60
	I0103 20:07:38.526091   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 36/60
	I0103 20:07:39.528018   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 37/60
	I0103 20:07:40.529387   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 38/60
	I0103 20:07:41.531036   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 39/60
	I0103 20:07:42.532864   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 40/60
	I0103 20:07:43.534379   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 41/60
	I0103 20:07:44.535725   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 42/60
	I0103 20:07:45.537068   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 43/60
	I0103 20:07:46.538473   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 44/60
	I0103 20:07:47.540643   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 45/60
	I0103 20:07:48.542114   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 46/60
	I0103 20:07:49.543630   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 47/60
	I0103 20:07:50.545145   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 48/60
	I0103 20:07:51.546883   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 49/60
	I0103 20:07:52.548730   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 50/60
	I0103 20:07:53.550357   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 51/60
	I0103 20:07:54.551922   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 52/60
	I0103 20:07:55.553505   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 53/60
	I0103 20:07:56.555074   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 54/60
	I0103 20:07:57.557058   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 55/60
	I0103 20:07:58.558564   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 56/60
	I0103 20:07:59.560443   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 57/60
	I0103 20:08:00.561888   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 58/60
	I0103 20:08:01.563777   60762 main.go:141] libmachine: (embed-certs-451331) Waiting for machine to stop 59/60
	I0103 20:08:02.564896   60762 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0103 20:08:02.564943   60762 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0103 20:08:02.567267   60762 out.go:177] 
	W0103 20:08:02.568867   60762 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0103 20:08:02.568883   60762 out.go:239] * 
	* 
	W0103 20:08:02.571071   60762 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0103 20:08:02.572556   60762 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-451331 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-451331 -n embed-certs-451331
E0103 20:08:04.479046   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/custom-flannel-719541/client.crt: no such file or directory
E0103 20:08:08.709066   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/enable-default-cni-719541/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-451331 -n embed-certs-451331: exit status 3 (18.488486055s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0103 20:08:21.062854   61463 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.197:22: connect: no route to host
	E0103 20:08:21.062875   61463 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.197:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-451331" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (140.00s)
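The post-mortem status check for this profile fails because SSH to the node never becomes reachable ("dial tcp 192.168.50.197:22: connect: no route to host"), which is why the helper reports exit status 3 and skips log retrieval. A minimal probe like the one below reproduces that reachability check in isolation; it assumes the address reported in the status error and uses a plain TCP dial rather than minikube's SSH client.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Node address:port taken from the status error above; the probe is generic.
	addr := "192.168.50.197:22"
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		fmt.Println("status probe failed:", err) // e.g. "connect: no route to host"
		return
	}
	defer conn.Close()
	fmt.Println("ssh port reachable; the status error was likely transient")
}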

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (140.27s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-749210 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-749210 --alsologtostderr -v=3: exit status 82 (2m1.796001627s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-749210"  ...
	* Stopping node "no-preload-749210"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0103 20:06:32.088113   60976 out.go:296] Setting OutFile to fd 1 ...
	I0103 20:06:32.088384   60976 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 20:06:32.088396   60976 out.go:309] Setting ErrFile to fd 2...
	I0103 20:06:32.088403   60976 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 20:06:32.088658   60976 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17885-9609/.minikube/bin
	I0103 20:06:32.088929   60976 out.go:303] Setting JSON to false
	I0103 20:06:32.089025   60976 mustload.go:65] Loading cluster: no-preload-749210
	I0103 20:06:32.089458   60976 config.go:182] Loaded profile config "no-preload-749210": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0103 20:06:32.089551   60976 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/no-preload-749210/config.json ...
	I0103 20:06:32.089765   60976 mustload.go:65] Loading cluster: no-preload-749210
	I0103 20:06:32.089905   60976 config.go:182] Loaded profile config "no-preload-749210": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0103 20:06:32.089943   60976 stop.go:39] StopHost: no-preload-749210
	I0103 20:06:32.090358   60976 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:06:32.090410   60976 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:06:32.104670   60976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41039
	I0103 20:06:32.105127   60976 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:06:32.105743   60976 main.go:141] libmachine: Using API Version  1
	I0103 20:06:32.105798   60976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:06:32.106122   60976 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:06:32.108773   60976 out.go:177] * Stopping node "no-preload-749210"  ...
	I0103 20:06:32.110250   60976 main.go:141] libmachine: Stopping "no-preload-749210"...
	I0103 20:06:32.110270   60976 main.go:141] libmachine: (no-preload-749210) Calling .GetState
	I0103 20:06:32.112269   60976 main.go:141] libmachine: (no-preload-749210) Calling .Stop
	I0103 20:06:32.116473   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 0/60
	I0103 20:06:33.118215   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 1/60
	I0103 20:06:34.119711   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 2/60
	I0103 20:06:35.121090   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 3/60
	I0103 20:06:36.122612   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 4/60
	I0103 20:06:37.124567   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 5/60
	I0103 20:06:38.126158   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 6/60
	I0103 20:06:39.127616   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 7/60
	I0103 20:06:40.129457   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 8/60
	I0103 20:06:41.131096   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 9/60
	I0103 20:06:42.133075   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 10/60
	I0103 20:06:43.134721   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 11/60
	I0103 20:06:44.136303   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 12/60
	I0103 20:06:45.137854   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 13/60
	I0103 20:06:46.139322   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 14/60
	I0103 20:06:47.141505   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 15/60
	I0103 20:06:48.143208   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 16/60
	I0103 20:06:49.144905   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 17/60
	I0103 20:06:50.146569   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 18/60
	I0103 20:06:51.148050   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 19/60
	I0103 20:06:52.150487   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 20/60
	I0103 20:06:53.152421   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 21/60
	I0103 20:06:54.154088   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 22/60
	I0103 20:06:55.155661   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 23/60
	I0103 20:06:56.157314   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 24/60
	I0103 20:06:57.159423   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 25/60
	I0103 20:06:58.160742   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 26/60
	I0103 20:06:59.162098   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 27/60
	I0103 20:07:00.163460   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 28/60
	I0103 20:07:01.165311   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 29/60
	I0103 20:07:02.168217   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 30/60
	I0103 20:07:03.169495   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 31/60
	I0103 20:07:04.171399   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 32/60
	I0103 20:07:05.172961   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 33/60
	I0103 20:07:06.174482   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 34/60
	I0103 20:07:07.176662   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 35/60
	I0103 20:07:08.178282   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 36/60
	I0103 20:07:09.180131   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 37/60
	I0103 20:07:10.181442   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 38/60
	I0103 20:07:11.183034   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 39/60
	I0103 20:07:12.185412   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 40/60
	I0103 20:07:13.187024   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 41/60
	I0103 20:07:14.188683   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 42/60
	I0103 20:07:15.190172   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 43/60
	I0103 20:07:16.191706   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 44/60
	I0103 20:07:17.193947   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 45/60
	I0103 20:07:18.195297   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 46/60
	I0103 20:07:19.196787   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 47/60
	I0103 20:07:20.198491   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 48/60
	I0103 20:07:21.200087   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 49/60
	I0103 20:07:22.202219   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 50/60
	I0103 20:07:23.203589   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 51/60
	I0103 20:07:24.205009   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 52/60
	I0103 20:07:25.206646   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 53/60
	I0103 20:07:26.208196   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 54/60
	I0103 20:07:27.210418   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 55/60
	I0103 20:07:28.212228   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 56/60
	I0103 20:07:29.213907   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 57/60
	I0103 20:07:30.215413   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 58/60
	I0103 20:07:31.216972   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 59/60
	I0103 20:07:32.218396   60976 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0103 20:07:32.218452   60976 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0103 20:07:32.218478   60976 retry.go:31] will retry after 1.473701661s: Temporary Error: stop: unable to stop vm, current state "Running"
	I0103 20:07:33.693106   60976 stop.go:39] StopHost: no-preload-749210
	I0103 20:07:33.693626   60976 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:07:33.693681   60976 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:07:33.708061   60976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41111
	I0103 20:07:33.708475   60976 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:07:33.708992   60976 main.go:141] libmachine: Using API Version  1
	I0103 20:07:33.709028   60976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:07:33.709323   60976 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:07:33.712440   60976 out.go:177] * Stopping node "no-preload-749210"  ...
	I0103 20:07:33.713791   60976 main.go:141] libmachine: Stopping "no-preload-749210"...
	I0103 20:07:33.713812   60976 main.go:141] libmachine: (no-preload-749210) Calling .GetState
	I0103 20:07:33.715692   60976 main.go:141] libmachine: (no-preload-749210) Calling .Stop
	I0103 20:07:33.719398   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 0/60
	I0103 20:07:34.721163   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 1/60
	I0103 20:07:35.722647   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 2/60
	I0103 20:07:36.724259   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 3/60
	I0103 20:07:37.725701   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 4/60
	I0103 20:07:38.727613   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 5/60
	I0103 20:07:39.729185   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 6/60
	I0103 20:07:40.730245   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 7/60
	I0103 20:07:41.732065   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 8/60
	I0103 20:07:42.733399   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 9/60
	I0103 20:07:43.735374   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 10/60
	I0103 20:07:44.736887   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 11/60
	I0103 20:07:45.738497   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 12/60
	I0103 20:07:46.739818   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 13/60
	I0103 20:07:47.741208   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 14/60
	I0103 20:07:48.743180   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 15/60
	I0103 20:07:49.744775   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 16/60
	I0103 20:07:50.746126   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 17/60
	I0103 20:07:51.747704   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 18/60
	I0103 20:07:52.749162   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 19/60
	I0103 20:07:53.751066   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 20/60
	I0103 20:07:54.752502   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 21/60
	I0103 20:07:55.754085   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 22/60
	I0103 20:07:56.755338   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 23/60
	I0103 20:07:57.757073   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 24/60
	I0103 20:07:58.758975   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 25/60
	I0103 20:07:59.760368   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 26/60
	I0103 20:08:00.762038   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 27/60
	I0103 20:08:01.763778   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 28/60
	I0103 20:08:02.765780   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 29/60
	I0103 20:08:03.767825   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 30/60
	I0103 20:08:04.769861   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 31/60
	I0103 20:08:05.771121   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 32/60
	I0103 20:08:06.772517   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 33/60
	I0103 20:08:07.773748   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 34/60
	I0103 20:08:08.775457   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 35/60
	I0103 20:08:09.776948   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 36/60
	I0103 20:08:10.778470   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 37/60
	I0103 20:08:11.780000   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 38/60
	I0103 20:08:12.781587   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 39/60
	I0103 20:08:13.783485   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 40/60
	I0103 20:08:14.784944   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 41/60
	I0103 20:08:15.786440   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 42/60
	I0103 20:08:16.787920   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 43/60
	I0103 20:08:17.789568   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 44/60
	I0103 20:08:18.791586   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 45/60
	I0103 20:08:19.793178   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 46/60
	I0103 20:08:20.794826   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 47/60
	I0103 20:08:21.796372   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 48/60
	I0103 20:08:22.797968   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 49/60
	I0103 20:08:23.799766   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 50/60
	I0103 20:08:24.801257   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 51/60
	I0103 20:08:25.802875   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 52/60
	I0103 20:08:26.804365   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 53/60
	I0103 20:08:27.805909   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 54/60
	I0103 20:08:28.807843   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 55/60
	I0103 20:08:29.809279   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 56/60
	I0103 20:08:30.810821   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 57/60
	I0103 20:08:31.812527   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 58/60
	I0103 20:08:32.814649   60976 main.go:141] libmachine: (no-preload-749210) Waiting for machine to stop 59/60
	I0103 20:08:33.815589   60976 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0103 20:08:33.815628   60976 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0103 20:08:33.817914   60976 out.go:177] 
	W0103 20:08:33.819454   60976 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0103 20:08:33.819466   60976 out.go:239] * 
	* 
	W0103 20:08:33.821590   60976 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0103 20:08:33.823073   60976 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-749210 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-749210 -n no-preload-749210
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-749210 -n no-preload-749210: exit status 3 (18.470259296s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0103 20:08:52.294888   61711 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.245:22: connect: no route to host
	E0103 20:08:52.294909   61711 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.245:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-749210" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (140.27s)
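In the post-mortem above, `out/minikube-linux-amd64 status --format={{.Host}}` prints "Error" and exits 3 because the status probe cannot open an SSH session to the node (dial tcp 192.168.61.245:22: connect: no route to host); the helper then treats exit code 3 as "may be ok" and skips log retrieval. The sketch below reproduces that check with the binary path and profile name taken from the log; the exit-code handling only illustrates what helpers_test.go:239 reports, it is not a copy of the test helper.

-- example (go) --
// Run the same status query the harness runs post-stop and interpret
// exit status 3 ("Error" state, SSH unreachable) as a non-fatal condition.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64",
		"status", "--format={{.Host}}",
		"-p", "no-preload-749210", "-n", "no-preload-749210")
	out, err := cmd.Output()
	host := strings.TrimSpace(string(out))

	if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 3 {
		// matches "status error: exit status 3 (may be ok)" in the log above
		fmt.Printf("host state %q - status error, exit 3 (may be ok)\n", host)
		return
	}
	if err != nil {
		fmt.Println("status command failed:", err)
		return
	}
	fmt.Println("host state:", host)
}
-- /example --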

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.74s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-018788 --alsologtostderr -v=3
E0103 20:06:35.160489   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/calico-719541/client.crt: no such file or directory
E0103 20:06:40.281084   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/calico-719541/client.crt: no such file or directory
E0103 20:06:42.554697   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/custom-flannel-719541/client.crt: no such file or directory
E0103 20:06:42.559947   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/custom-flannel-719541/client.crt: no such file or directory
E0103 20:06:42.570205   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/custom-flannel-719541/client.crt: no such file or directory
E0103 20:06:42.590514   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/custom-flannel-719541/client.crt: no such file or directory
E0103 20:06:42.630845   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/custom-flannel-719541/client.crt: no such file or directory
E0103 20:06:42.711146   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/custom-flannel-719541/client.crt: no such file or directory
E0103 20:06:42.871667   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/custom-flannel-719541/client.crt: no such file or directory
E0103 20:06:43.192006   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/custom-flannel-719541/client.crt: no such file or directory
E0103 20:06:43.833111   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/custom-flannel-719541/client.crt: no such file or directory
E0103 20:06:45.113774   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/custom-flannel-719541/client.crt: no such file or directory
E0103 20:06:47.674961   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/custom-flannel-719541/client.crt: no such file or directory
E0103 20:06:50.522063   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/calico-719541/client.crt: no such file or directory
E0103 20:06:52.795711   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/custom-flannel-719541/client.crt: no such file or directory
E0103 20:07:03.036933   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/custom-flannel-719541/client.crt: no such file or directory
E0103 20:07:04.854852   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/auto-719541/client.crt: no such file or directory
E0103 20:07:11.003244   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/calico-719541/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-018788 --alsologtostderr -v=3: exit status 82 (2m1.126529507s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-018788"  ...
	* Stopping node "default-k8s-diff-port-018788"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0103 20:06:33.389022   61054 out.go:296] Setting OutFile to fd 1 ...
	I0103 20:06:33.389305   61054 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 20:06:33.389317   61054 out.go:309] Setting ErrFile to fd 2...
	I0103 20:06:33.389322   61054 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 20:06:33.389576   61054 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17885-9609/.minikube/bin
	I0103 20:06:33.389848   61054 out.go:303] Setting JSON to false
	I0103 20:06:33.389923   61054 mustload.go:65] Loading cluster: default-k8s-diff-port-018788
	I0103 20:06:33.390264   61054 config.go:182] Loaded profile config "default-k8s-diff-port-018788": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 20:06:33.390329   61054 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/default-k8s-diff-port-018788/config.json ...
	I0103 20:06:33.390485   61054 mustload.go:65] Loading cluster: default-k8s-diff-port-018788
	I0103 20:06:33.390624   61054 config.go:182] Loaded profile config "default-k8s-diff-port-018788": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 20:06:33.390659   61054 stop.go:39] StopHost: default-k8s-diff-port-018788
	I0103 20:06:33.391089   61054 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:06:33.391157   61054 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:06:33.405175   61054 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33573
	I0103 20:06:33.405683   61054 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:06:33.406181   61054 main.go:141] libmachine: Using API Version  1
	I0103 20:06:33.406205   61054 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:06:33.406644   61054 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:06:33.409230   61054 out.go:177] * Stopping node "default-k8s-diff-port-018788"  ...
	I0103 20:06:33.410647   61054 main.go:141] libmachine: Stopping "default-k8s-diff-port-018788"...
	I0103 20:06:33.410667   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetState
	I0103 20:06:33.412277   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .Stop
	I0103 20:06:33.415409   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 0/60
	I0103 20:06:34.416959   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 1/60
	I0103 20:06:35.418288   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 2/60
	I0103 20:06:36.419524   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 3/60
	I0103 20:06:37.420850   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 4/60
	I0103 20:06:38.422958   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 5/60
	I0103 20:06:39.424693   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 6/60
	I0103 20:06:40.426073   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 7/60
	I0103 20:06:41.427579   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 8/60
	I0103 20:06:42.429014   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 9/60
	I0103 20:06:43.431366   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 10/60
	I0103 20:06:44.433020   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 11/60
	I0103 20:06:45.434780   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 12/60
	I0103 20:06:46.436229   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 13/60
	I0103 20:06:47.438443   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 14/60
	I0103 20:06:48.440502   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 15/60
	I0103 20:06:49.442250   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 16/60
	I0103 20:06:50.443670   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 17/60
	I0103 20:06:51.445181   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 18/60
	I0103 20:06:52.446699   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 19/60
	I0103 20:06:53.449209   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 20/60
	I0103 20:06:54.450860   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 21/60
	I0103 20:06:55.452205   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 22/60
	I0103 20:06:56.453623   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 23/60
	I0103 20:06:57.455118   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 24/60
	I0103 20:06:58.457199   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 25/60
	I0103 20:06:59.458886   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 26/60
	I0103 20:07:00.460376   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 27/60
	I0103 20:07:01.461697   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 28/60
	I0103 20:07:02.463281   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 29/60
	I0103 20:07:03.465635   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 30/60
	I0103 20:07:04.467185   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 31/60
	I0103 20:07:05.468849   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 32/60
	I0103 20:07:06.470685   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 33/60
	I0103 20:07:07.472157   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 34/60
	I0103 20:07:08.474341   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 35/60
	I0103 20:07:09.476111   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 36/60
	I0103 20:07:10.477479   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 37/60
	I0103 20:07:11.479036   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 38/60
	I0103 20:07:12.480600   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 39/60
	I0103 20:07:13.482018   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 40/60
	I0103 20:07:14.483616   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 41/60
	I0103 20:07:15.485280   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 42/60
	I0103 20:07:16.486903   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 43/60
	I0103 20:07:17.488435   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 44/60
	I0103 20:07:18.490760   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 45/60
	I0103 20:07:19.492345   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 46/60
	I0103 20:07:20.494156   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 47/60
	I0103 20:07:21.495826   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 48/60
	I0103 20:07:22.497204   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 49/60
	I0103 20:07:23.499835   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 50/60
	I0103 20:07:24.501486   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 51/60
	I0103 20:07:25.503045   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 52/60
	I0103 20:07:26.504676   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 53/60
	I0103 20:07:27.506286   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 54/60
	I0103 20:07:28.508521   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 55/60
	I0103 20:07:29.510279   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 56/60
	I0103 20:07:30.511973   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 57/60
	I0103 20:07:31.513617   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 58/60
	I0103 20:07:32.515501   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 59/60
	I0103 20:07:33.516805   61054 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0103 20:07:33.516867   61054 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0103 20:07:33.516886   61054 retry.go:31] will retry after 808.612621ms: Temporary Error: stop: unable to stop vm, current state "Running"
	I0103 20:07:34.325735   61054 stop.go:39] StopHost: default-k8s-diff-port-018788
	I0103 20:07:34.326089   61054 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:07:34.326126   61054 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:07:34.340405   61054 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38931
	I0103 20:07:34.340856   61054 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:07:34.341322   61054 main.go:141] libmachine: Using API Version  1
	I0103 20:07:34.341352   61054 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:07:34.341684   61054 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:07:34.343801   61054 out.go:177] * Stopping node "default-k8s-diff-port-018788"  ...
	I0103 20:07:34.345439   61054 main.go:141] libmachine: Stopping "default-k8s-diff-port-018788"...
	I0103 20:07:34.345470   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetState
	I0103 20:07:34.347203   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .Stop
	I0103 20:07:34.350510   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 0/60
	I0103 20:07:35.352167   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 1/60
	I0103 20:07:36.353947   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 2/60
	I0103 20:07:37.355272   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 3/60
	I0103 20:07:38.357328   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 4/60
	I0103 20:07:39.358797   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 5/60
	I0103 20:07:40.360375   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 6/60
	I0103 20:07:41.361936   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 7/60
	I0103 20:07:42.363491   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 8/60
	I0103 20:07:43.364833   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 9/60
	I0103 20:07:44.366877   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 10/60
	I0103 20:07:45.368407   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 11/60
	I0103 20:07:46.370042   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 12/60
	I0103 20:07:47.371622   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 13/60
	I0103 20:07:48.373236   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 14/60
	I0103 20:07:49.374941   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 15/60
	I0103 20:07:50.376136   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 16/60
	I0103 20:07:51.377709   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 17/60
	I0103 20:07:52.379774   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 18/60
	I0103 20:07:53.381446   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 19/60
	I0103 20:07:54.383635   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 20/60
	I0103 20:07:55.385178   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 21/60
	I0103 20:07:56.386861   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 22/60
	I0103 20:07:57.388588   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 23/60
	I0103 20:07:58.390431   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 24/60
	I0103 20:07:59.391739   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 25/60
	I0103 20:08:00.393177   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 26/60
	I0103 20:08:01.394633   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 27/60
	I0103 20:08:02.396138   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 28/60
	I0103 20:08:03.397350   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 29/60
	I0103 20:08:04.398701   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 30/60
	I0103 20:08:05.399966   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 31/60
	I0103 20:08:06.401570   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 32/60
	I0103 20:08:07.402897   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 33/60
	I0103 20:08:08.404276   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 34/60
	I0103 20:08:09.406326   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 35/60
	I0103 20:08:10.407561   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 36/60
	I0103 20:08:11.409186   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 37/60
	I0103 20:08:12.410685   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 38/60
	I0103 20:08:13.412086   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 39/60
	I0103 20:08:14.414610   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 40/60
	I0103 20:08:15.416014   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 41/60
	I0103 20:08:16.417501   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 42/60
	I0103 20:08:17.419061   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 43/60
	I0103 20:08:18.420393   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 44/60
	I0103 20:08:19.421861   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 45/60
	I0103 20:08:20.423144   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 46/60
	I0103 20:08:21.424647   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 47/60
	I0103 20:08:22.426105   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 48/60
	I0103 20:08:23.427534   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 49/60
	I0103 20:08:24.429342   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 50/60
	I0103 20:08:25.430769   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 51/60
	I0103 20:08:26.432572   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 52/60
	I0103 20:08:27.434077   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 53/60
	I0103 20:08:28.435401   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 54/60
	I0103 20:08:29.437597   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 55/60
	I0103 20:08:30.439083   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 56/60
	I0103 20:08:31.440579   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 57/60
	I0103 20:08:32.441833   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 58/60
	I0103 20:08:33.443635   61054 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for machine to stop 59/60
	I0103 20:08:34.444567   61054 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0103 20:08:34.444619   61054 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0103 20:08:34.446830   61054 out.go:177] 
	W0103 20:08:34.448490   61054 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0103 20:08:34.448505   61054 out.go:239] * 
	W0103 20:08:34.450785   61054 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0103 20:08:34.452550   61054 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-018788 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-018788 -n default-k8s-diff-port-018788
E0103 20:08:35.091224   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/flannel-719541/client.crt: no such file or directory
E0103 20:08:37.652360   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/flannel-719541/client.crt: no such file or directory
E0103 20:08:42.772876   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/flannel-719541/client.crt: no such file or directory
E0103 20:08:49.669306   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/enable-default-cni-719541/client.crt: no such file or directory
E0103 20:08:50.151155   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/ingress-addon-legacy-736101/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-018788 -n default-k8s-diff-port-018788: exit status 3 (18.608895627s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0103 20:08:53.062820   61741 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.139:22: connect: no route to host
	E0103 20:08:53.062843   61741 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.139:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-018788" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.74s)
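The stderr above shows the shape of both failing stops: libmachine issues .Stop, polls the machine state once a second for 60 attempts, retries the whole StopHost once after a ~0.8s backoff (retry.go:31), and finally gives up with GUEST_STOP_TIMEOUT and exit status 82 because the VM still reports "Running". A minimal sketch of that poll-and-retry loop follows; stopVM and vmState are hypothetical placeholders, not the kvm2 driver's API.

-- example (go) --
// Request a stop, poll once per second for up to 60 attempts, retry the whole
// sequence once after a short backoff, then fail with a timeout error -- the
// same progression logged by the failing runs in this group.
package main

import (
	"fmt"
	"time"
)

// vmState always reports "Running" here, reproducing the failing runs.
func vmState(name string) string { return "Running" }

// stopVM stands in for the driver's Stop call (e.g. a graceful shutdown request).
func stopVM(name string) error { return nil }

func stopWithPolling(name string, attempts int) error {
	if err := stopVM(name); err != nil {
		return err
	}
	for i := 0; i < attempts; i++ {
		if vmState(name) != "Running" {
			return nil // machine reached a stopped state
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
		time.Sleep(time.Second)
	}
	return fmt.Errorf("unable to stop vm, current state %q", vmState(name))
}

func main() {
	const profile = "default-k8s-diff-port-018788"
	err := stopWithPolling(profile, 60)
	if err != nil {
		time.Sleep(808 * time.Millisecond) // single retry, as at retry.go:31 above
		err = stopWithPolling(profile, 60)
	}
	if err != nil {
		// reported as "Exiting due to GUEST_STOP_TIMEOUT", exit status 82
		fmt.Println("X Exiting due to GUEST_STOP_TIMEOUT:", err)
	}
}
-- /example --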

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-927922 -n old-k8s-version-927922
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-927922 -n old-k8s-version-927922: exit status 3 (3.166525531s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0103 20:07:43.782903   61291 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.12:22: connect: no route to host
	E0103 20:07:43.782931   61291 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.12:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-927922 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0103 20:07:48.228301   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/enable-default-cni-719541/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-927922 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153366691s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.12:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-927922 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-927922 -n old-k8s-version-927922
E0103 20:07:51.964426   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/calico-719541/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-927922 -n old-k8s-version-927922: exit status 3 (3.062368643s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0103 20:07:52.998928   61369 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.12:22: connect: no route to host
	E0103 20:07:52.998960   61369 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.12:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "old-k8s-version-927922" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (12.38s)
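The exit-11 failure above aborts before the addon is touched: enabling an addon first checks whether the cluster is paused, and that check needs a working SSH session to the node (the "check paused: list paused: crictl list" chain); with the VM's SSH port unreachable the dial fails with "no route to host" and the command exits with MK_ADDON_ENABLE_PAUSED. The probe below, against the node address from the log, reproduces only the failing connectivity step; it is not minikube's client code.

-- example (go) --
// Probe the node's SSH port the way the paused-state check ultimately must;
// in this run the dial fails with "connect: no route to host".
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "192.168.72.12:22" // old-k8s-version-927922 node IP, per the stderr above
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		fmt.Println("cannot reach node SSH port:", err)
		return
	}
	defer conn.Close()
	fmt.Println("SSH port reachable; the crictl paused-state check could proceed")
}
-- /example --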

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-451331 -n embed-certs-451331
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-451331 -n embed-certs-451331: exit status 3 (3.168369962s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0103 20:08:24.230920   61541 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.197:22: connect: no route to host
	E0103 20:08:24.230941   61541 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.197:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-451331 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-451331 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152573236s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.197:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-451331 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-451331 -n embed-certs-451331
E0103 20:08:32.532212   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/flannel-719541/client.crt: no such file or directory
E0103 20:08:32.537535   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/flannel-719541/client.crt: no such file or directory
E0103 20:08:32.547799   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/flannel-719541/client.crt: no such file or directory
E0103 20:08:32.568311   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/flannel-719541/client.crt: no such file or directory
E0103 20:08:32.608626   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/flannel-719541/client.crt: no such file or directory
E0103 20:08:32.689028   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/flannel-719541/client.crt: no such file or directory
E0103 20:08:32.849978   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/flannel-719541/client.crt: no such file or directory
E0103 20:08:33.170808   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/flannel-719541/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-451331 -n embed-certs-451331: exit status 3 (3.062636466s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0103 20:08:33.446839   61635 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.197:22: connect: no route to host
	E0103 20:08:33.446859   61635 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.197:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-451331" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-749210 -n no-preload-749210
E0103 20:08:53.013866   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/flannel-719541/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-749210 -n no-preload-749210: exit status 3 (3.167779492s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0103 20:08:55.462893   61804 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.245:22: connect: no route to host
	E0103 20:08:55.462913   61804 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.245:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-749210 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-749210 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.154175042s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.245:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-749210 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-749210 -n no-preload-749210
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-749210 -n no-preload-749210: exit status 3 (3.061935182s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0103 20:09:04.678924   61928 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.245:22: connect: no route to host
	E0103 20:09:04.678945   61928 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.245:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-749210" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-018788 -n default-k8s-diff-port-018788
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-018788 -n default-k8s-diff-port-018788: exit status 3 (3.167974201s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0103 20:08:56.230908   61834 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.139:22: connect: no route to host
	E0103 20:08:56.230935   61834 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.139:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-018788 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-018788 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152743609s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.139:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-018788 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-018788 -n default-k8s-diff-port-018788
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-018788 -n default-k8s-diff-port-018788: exit status 3 (3.062951896s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0103 20:09:05.446909   61958 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.139:22: connect: no route to host
	E0103 20:09:05.446938   61958 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.139:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-018788" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0103 20:14:37.136179   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/bridge-719541/client.crt: no such file or directory
E0103 20:14:48.942267   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/kindnet-719541/client.crt: no such file or directory
E0103 20:15:48.654193   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/functional-166268/client.crt: no such file or directory
E0103 20:15:55.308483   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/addons-848866/client.crt: no such file or directory
E0103 20:16:30.038739   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/calico-719541/client.crt: no such file or directory
E0103 20:16:42.554200   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/custom-flannel-719541/client.crt: no such file or directory
E0103 20:17:18.357364   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/addons-848866/client.crt: no such file or directory
E0103 20:17:27.747912   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/enable-default-cni-719541/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-927922 -n old-k8s-version-927922
start_stop_delete_test.go:274: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-01-03 20:23:32.355602254 +0000 UTC m=+5177.828179248
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
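The test waits up to 9m0s for a pod labelled k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace; in this run nothing matched before the deadline, so the poll surfaces the "client rate limiter Wait returned an error: context deadline exceeded" warning and the test fails. A rough client-go equivalent of that wait is sketched below, using the current kubeconfig; it mirrors the test's intent rather than its code.

-- example (go) --
// Poll for a Running pod matching the dashboard label until a 9-minute
// deadline; an unreachable or empty cluster ends with ctx.Err() being
// context.DeadlineExceeded, the failure mode recorded above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
	defer cancel()

	for {
		pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(ctx,
			metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
		if err != nil {
			// with an unreachable API server this eventually surfaces as the
			// "client rate limiter Wait returned an error" warning in the log
			fmt.Println("pod list failed:", err)
			return
		}
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				fmt.Println("dashboard pod running:", p.Name)
				return
			}
		}
		select {
		case <-ctx.Done():
			fmt.Println("failed waiting for dashboard pod:", ctx.Err())
			return
		case <-time.After(5 * time.Second):
		}
	}
}
-- /example --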
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-927922 -n old-k8s-version-927922
E0103 20:23:32.532793   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/flannel-719541/client.crt: no such file or directory
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-927922 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-927922 logs -n 25: (1.581691747s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-719541 sudo cat                              | bridge-719541                | jenkins | v1.32.0 | 03 Jan 24 20:04 UTC | 03 Jan 24 20:04 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-719541 sudo                                  | bridge-719541                | jenkins | v1.32.0 | 03 Jan 24 20:04 UTC | 03 Jan 24 20:04 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-719541 sudo                                  | bridge-719541                | jenkins | v1.32.0 | 03 Jan 24 20:04 UTC | 03 Jan 24 20:04 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-719541 sudo                                  | bridge-719541                | jenkins | v1.32.0 | 03 Jan 24 20:04 UTC | 03 Jan 24 20:04 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-719541 sudo find                             | bridge-719541                | jenkins | v1.32.0 | 03 Jan 24 20:04 UTC | 03 Jan 24 20:04 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-719541 sudo crio                             | bridge-719541                | jenkins | v1.32.0 | 03 Jan 24 20:04 UTC | 03 Jan 24 20:04 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-719541                                       | bridge-719541                | jenkins | v1.32.0 | 03 Jan 24 20:04 UTC | 03 Jan 24 20:04 UTC |
	| delete  | -p                                                     | disable-driver-mounts-350596 | jenkins | v1.32.0 | 03 Jan 24 20:04 UTC | 03 Jan 24 20:04 UTC |
	|         | disable-driver-mounts-350596                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-018788 | jenkins | v1.32.0 | 03 Jan 24 20:04 UTC | 03 Jan 24 20:06 UTC |
	|         | default-k8s-diff-port-018788                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-927922        | old-k8s-version-927922       | jenkins | v1.32.0 | 03 Jan 24 20:05 UTC | 03 Jan 24 20:05 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-927922                              | old-k8s-version-927922       | jenkins | v1.32.0 | 03 Jan 24 20:05 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-451331            | embed-certs-451331           | jenkins | v1.32.0 | 03 Jan 24 20:05 UTC | 03 Jan 24 20:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-451331                                  | embed-certs-451331           | jenkins | v1.32.0 | 03 Jan 24 20:06 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-749210             | no-preload-749210            | jenkins | v1.32.0 | 03 Jan 24 20:06 UTC | 03 Jan 24 20:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-749210                                   | no-preload-749210            | jenkins | v1.32.0 | 03 Jan 24 20:06 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-018788  | default-k8s-diff-port-018788 | jenkins | v1.32.0 | 03 Jan 24 20:06 UTC | 03 Jan 24 20:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-018788 | jenkins | v1.32.0 | 03 Jan 24 20:06 UTC |                     |
	|         | default-k8s-diff-port-018788                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-927922             | old-k8s-version-927922       | jenkins | v1.32.0 | 03 Jan 24 20:07 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-927922                              | old-k8s-version-927922       | jenkins | v1.32.0 | 03 Jan 24 20:07 UTC | 03 Jan 24 20:14 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-451331                 | embed-certs-451331           | jenkins | v1.32.0 | 03 Jan 24 20:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-451331                                  | embed-certs-451331           | jenkins | v1.32.0 | 03 Jan 24 20:08 UTC | 03 Jan 24 20:17 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-749210                  | no-preload-749210            | jenkins | v1.32.0 | 03 Jan 24 20:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-018788       | default-k8s-diff-port-018788 | jenkins | v1.32.0 | 03 Jan 24 20:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-749210                                   | no-preload-749210            | jenkins | v1.32.0 | 03 Jan 24 20:09 UTC | 03 Jan 24 20:18 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-018788 | jenkins | v1.32.0 | 03 Jan 24 20:09 UTC | 03 Jan 24 20:18 UTC |
	|         | default-k8s-diff-port-018788                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/03 20:09:05
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0103 20:09:05.502375   62050 out.go:296] Setting OutFile to fd 1 ...
	I0103 20:09:05.502548   62050 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 20:09:05.502558   62050 out.go:309] Setting ErrFile to fd 2...
	I0103 20:09:05.502566   62050 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 20:09:05.502759   62050 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17885-9609/.minikube/bin
	I0103 20:09:05.503330   62050 out.go:303] Setting JSON to false
	I0103 20:09:05.504222   62050 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6693,"bootTime":1704305853,"procs":223,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0103 20:09:05.504283   62050 start.go:138] virtualization: kvm guest
	I0103 20:09:05.507002   62050 out.go:177] * [default-k8s-diff-port-018788] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0103 20:09:05.508642   62050 out.go:177]   - MINIKUBE_LOCATION=17885
	I0103 20:09:05.508667   62050 notify.go:220] Checking for updates...
	I0103 20:09:05.510296   62050 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0103 20:09:05.511927   62050 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17885-9609/kubeconfig
	I0103 20:09:05.513487   62050 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17885-9609/.minikube
	I0103 20:09:05.515064   62050 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0103 20:09:05.516515   62050 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0103 20:09:05.518301   62050 config.go:182] Loaded profile config "default-k8s-diff-port-018788": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 20:09:05.518774   62050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:09:05.518827   62050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:09:05.533730   62050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37361
	I0103 20:09:05.534098   62050 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:09:05.534667   62050 main.go:141] libmachine: Using API Version  1
	I0103 20:09:05.534699   62050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:09:05.535027   62050 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:09:05.535298   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .DriverName
	I0103 20:09:05.535543   62050 driver.go:392] Setting default libvirt URI to qemu:///system
	I0103 20:09:05.535823   62050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:09:05.535855   62050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:09:05.549808   62050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33389
	I0103 20:09:05.550147   62050 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:09:05.550708   62050 main.go:141] libmachine: Using API Version  1
	I0103 20:09:05.550733   62050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:09:05.551041   62050 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:09:05.551258   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .DriverName
	I0103 20:09:05.583981   62050 out.go:177] * Using the kvm2 driver based on existing profile
	I0103 20:09:05.585560   62050 start.go:298] selected driver: kvm2
	I0103 20:09:05.585580   62050 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-018788 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-018788 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.139 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 20:09:05.585707   62050 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0103 20:09:05.586411   62050 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:09:05.586494   62050 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17885-9609/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0103 20:09:05.601346   62050 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0103 20:09:05.601747   62050 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0103 20:09:05.601812   62050 cni.go:84] Creating CNI manager for ""
	I0103 20:09:05.601828   62050 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0103 20:09:05.601839   62050 start_flags.go:323] config:
	{Name:default-k8s-diff-port-018788 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-018788 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.139 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 20:09:05.602011   62050 iso.go:125] acquiring lock: {Name:mk59d09085a9554144b68de9b7bfe0e0fce53cc5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:09:05.604007   62050 out.go:177] * Starting control plane node default-k8s-diff-port-018788 in cluster default-k8s-diff-port-018788
	I0103 20:09:03.174819   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:09:06.246788   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:09:04.840696   62015 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0103 20:09:04.840826   62015 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/no-preload-749210/config.json ...
	I0103 20:09:04.840950   62015 cache.go:107] acquiring lock: {Name:mk76774936d94ce826f83ee0faaaf3557831e6bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:09:04.840994   62015 cache.go:107] acquiring lock: {Name:mk25b47a2b083e99837dbc206b0832b20d7da669 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:09:04.841017   62015 cache.go:107] acquiring lock: {Name:mk0a26120b5274bc796f1ae286da54dda262a5a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:09:04.841059   62015 cache.go:115] /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 exists
	I0103 20:09:04.841064   62015 start.go:365] acquiring machines lock for no-preload-749210: {Name:mk43df5d7e9fef8aa5f3e5c539ca15bff35ae8cf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0103 20:09:04.841070   62015 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2" took 128.344µs
	I0103 20:09:04.841078   62015 cache.go:115] /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 exists
	I0103 20:09:04.841081   62015 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 succeeded
	I0103 20:09:04.841085   62015 cache.go:115] /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 exists
	I0103 20:09:04.840951   62015 cache.go:107] acquiring lock: {Name:mk372d2259ddc4c784d2a14a7416ba9b749d6f9a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:09:04.841089   62015 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9" took 97.811µs
	I0103 20:09:04.841093   62015 cache.go:96] cache image "registry.k8s.io/etcd:3.5.10-0" -> "/home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0" took 87.964µs
	I0103 20:09:04.841108   62015 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 succeeded
	I0103 20:09:04.841109   62015 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.10-0 -> /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 succeeded
	I0103 20:09:04.841115   62015 cache.go:115] /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0103 20:09:04.841052   62015 cache.go:107] acquiring lock: {Name:mk04d21d7cdef9332755ef804a44022ba9c4a8c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:09:04.841129   62015 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 185.143µs
	I0103 20:09:04.841155   62015 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0103 20:09:04.841139   62015 cache.go:107] acquiring lock: {Name:mk5c34e1c9b00efde01e776962411ad1105596ae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:09:04.841183   62015 cache.go:115] /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0103 20:09:04.841203   62015 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1" took 176.832µs
	I0103 20:09:04.841212   62015 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0103 20:09:04.841400   62015 cache.go:107] acquiring lock: {Name:mk0ae9e390d74a93289bc4e45b5511dce57beeb9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:09:04.841216   62015 cache.go:107] acquiring lock: {Name:mkccb08ee6224be0e6786052f4bebc8d21ec8a42 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:09:04.841614   62015 cache.go:115] /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 exists
	I0103 20:09:04.841633   62015 cache.go:115] /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 exists
	I0103 20:09:04.841675   62015 cache.go:115] /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 exists
	I0103 20:09:04.841679   62015 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2" took 497.325µs
	I0103 20:09:04.841672   62015 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2" took 557.891µs
	I0103 20:09:04.841716   62015 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 succeeded
	I0103 20:09:04.841696   62015 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2" took 499.205µs
	I0103 20:09:04.841745   62015 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 succeeded
	I0103 20:09:04.841706   62015 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 succeeded
	I0103 20:09:04.841755   62015 cache.go:87] Successfully saved all images to host disk.
	I0103 20:09:05.605517   62050 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0103 20:09:05.605574   62050 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0103 20:09:05.605590   62050 cache.go:56] Caching tarball of preloaded images
	I0103 20:09:05.605669   62050 preload.go:174] Found /home/jenkins/minikube-integration/17885-9609/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0103 20:09:05.605681   62050 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0103 20:09:05.605787   62050 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/default-k8s-diff-port-018788/config.json ...
	I0103 20:09:05.605973   62050 start.go:365] acquiring machines lock for default-k8s-diff-port-018788: {Name:mk43df5d7e9fef8aa5f3e5c539ca15bff35ae8cf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0103 20:09:12.326805   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:09:15.398807   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:09:21.478760   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:09:24.550821   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:09:30.630841   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:09:33.702766   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:09:39.782732   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:09:42.854926   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:09:48.934815   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:09:52.006845   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:09:58.086804   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:10:01.158903   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:10:07.238808   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:10:10.310897   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:10:16.390869   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:10:19.462833   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:10:25.542866   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:10:28.614753   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:10:34.694867   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:10:37.766876   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:10:43.846838   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:10:46.918843   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:10:52.998853   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:10:56.070822   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:11:02.150825   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:11:05.222884   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:11:11.302787   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:11:14.374818   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:11:20.454810   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:11:23.526899   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:11:29.606842   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:11:32.678789   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:11:38.758787   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:11:41.830855   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:11:47.910801   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:11:50.982868   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:11:57.062889   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:12:00.134834   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:12:06.214856   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:12:09.286845   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:12:15.366787   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:12:18.438756   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:12:24.518814   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:12:27.590887   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:12:30.594981   61676 start.go:369] acquired machines lock for "embed-certs-451331" in 3m56.986277612s
	I0103 20:12:30.595030   61676 start.go:96] Skipping create...Using existing machine configuration
	I0103 20:12:30.595039   61676 fix.go:54] fixHost starting: 
	I0103 20:12:30.595434   61676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:12:30.595466   61676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:12:30.609917   61676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43047
	I0103 20:12:30.610302   61676 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:12:30.610819   61676 main.go:141] libmachine: Using API Version  1
	I0103 20:12:30.610845   61676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:12:30.611166   61676 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:12:30.611348   61676 main.go:141] libmachine: (embed-certs-451331) Calling .DriverName
	I0103 20:12:30.611486   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetState
	I0103 20:12:30.613108   61676 fix.go:102] recreateIfNeeded on embed-certs-451331: state=Stopped err=<nil>
	I0103 20:12:30.613128   61676 main.go:141] libmachine: (embed-certs-451331) Calling .DriverName
	W0103 20:12:30.613291   61676 fix.go:128] unexpected machine state, will restart: <nil>
	I0103 20:12:30.615194   61676 out.go:177] * Restarting existing kvm2 VM for "embed-certs-451331" ...
	I0103 20:12:30.592855   61400 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0103 20:12:30.592889   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHHostname
	I0103 20:12:30.594843   61400 machine.go:91] provisioned docker machine in 4m37.406324683s
	I0103 20:12:30.594886   61400 fix.go:56] fixHost completed within 4m37.42774841s
	I0103 20:12:30.594892   61400 start.go:83] releasing machines lock for "old-k8s-version-927922", held for 4m37.427764519s
	W0103 20:12:30.594913   61400 start.go:694] error starting host: provision: host is not running
	W0103 20:12:30.595005   61400 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0103 20:12:30.595014   61400 start.go:709] Will try again in 5 seconds ...
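	The long run of "no route to host" errors above means SSH to 192.168.72.12:22 never succeeded before provisioning gave up, so fixHost ends with "host is not running". A quick manual check on the libvirt host could look like this (illustrative only; the domain name follows the profile-name convention used elsewhere in this log, and the IP/port are the ones libmachine was dialing):
	
	$ virsh list --all                          # is the old-k8s-version-927922 domain actually running?
	$ virsh domifaddr old-k8s-version-927922    # which address, if any, did the guest lease?
	$ nc -vz 192.168.72.12 22                   # probe the SSH port libmachine keeps retrying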
	I0103 20:12:30.616365   61676 main.go:141] libmachine: (embed-certs-451331) Calling .Start
	I0103 20:12:30.616513   61676 main.go:141] libmachine: (embed-certs-451331) Ensuring networks are active...
	I0103 20:12:30.617380   61676 main.go:141] libmachine: (embed-certs-451331) Ensuring network default is active
	I0103 20:12:30.617718   61676 main.go:141] libmachine: (embed-certs-451331) Ensuring network mk-embed-certs-451331 is active
	I0103 20:12:30.618103   61676 main.go:141] libmachine: (embed-certs-451331) Getting domain xml...
	I0103 20:12:30.618735   61676 main.go:141] libmachine: (embed-certs-451331) Creating domain...
	I0103 20:12:31.839751   61676 main.go:141] libmachine: (embed-certs-451331) Waiting to get IP...
	I0103 20:12:31.840608   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:31.841035   61676 main.go:141] libmachine: (embed-certs-451331) DBG | unable to find current IP address of domain embed-certs-451331 in network mk-embed-certs-451331
	I0103 20:12:31.841117   61676 main.go:141] libmachine: (embed-certs-451331) DBG | I0103 20:12:31.841008   62575 retry.go:31] will retry after 303.323061ms: waiting for machine to come up
	I0103 20:12:32.146508   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:32.147005   61676 main.go:141] libmachine: (embed-certs-451331) DBG | unable to find current IP address of domain embed-certs-451331 in network mk-embed-certs-451331
	I0103 20:12:32.147037   61676 main.go:141] libmachine: (embed-certs-451331) DBG | I0103 20:12:32.146950   62575 retry.go:31] will retry after 240.92709ms: waiting for machine to come up
	I0103 20:12:32.389487   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:32.389931   61676 main.go:141] libmachine: (embed-certs-451331) DBG | unable to find current IP address of domain embed-certs-451331 in network mk-embed-certs-451331
	I0103 20:12:32.389962   61676 main.go:141] libmachine: (embed-certs-451331) DBG | I0103 20:12:32.389887   62575 retry.go:31] will retry after 473.263026ms: waiting for machine to come up
	I0103 20:12:32.864624   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:32.865060   61676 main.go:141] libmachine: (embed-certs-451331) DBG | unable to find current IP address of domain embed-certs-451331 in network mk-embed-certs-451331
	I0103 20:12:32.865082   61676 main.go:141] libmachine: (embed-certs-451331) DBG | I0103 20:12:32.865006   62575 retry.go:31] will retry after 473.373684ms: waiting for machine to come up
	I0103 20:12:33.339691   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:33.340156   61676 main.go:141] libmachine: (embed-certs-451331) DBG | unable to find current IP address of domain embed-certs-451331 in network mk-embed-certs-451331
	I0103 20:12:33.340189   61676 main.go:141] libmachine: (embed-certs-451331) DBG | I0103 20:12:33.340098   62575 retry.go:31] will retry after 639.850669ms: waiting for machine to come up
	I0103 20:12:35.596669   61400 start.go:365] acquiring machines lock for old-k8s-version-927922: {Name:mk43df5d7e9fef8aa5f3e5c539ca15bff35ae8cf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0103 20:12:33.982104   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:33.982622   61676 main.go:141] libmachine: (embed-certs-451331) DBG | unable to find current IP address of domain embed-certs-451331 in network mk-embed-certs-451331
	I0103 20:12:33.982655   61676 main.go:141] libmachine: (embed-certs-451331) DBG | I0103 20:12:33.982583   62575 retry.go:31] will retry after 589.282725ms: waiting for machine to come up
	I0103 20:12:34.573280   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:34.573692   61676 main.go:141] libmachine: (embed-certs-451331) DBG | unable to find current IP address of domain embed-certs-451331 in network mk-embed-certs-451331
	I0103 20:12:34.573716   61676 main.go:141] libmachine: (embed-certs-451331) DBG | I0103 20:12:34.573639   62575 retry.go:31] will retry after 884.387817ms: waiting for machine to come up
	I0103 20:12:35.459819   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:35.460233   61676 main.go:141] libmachine: (embed-certs-451331) DBG | unable to find current IP address of domain embed-certs-451331 in network mk-embed-certs-451331
	I0103 20:12:35.460287   61676 main.go:141] libmachine: (embed-certs-451331) DBG | I0103 20:12:35.460168   62575 retry.go:31] will retry after 1.326571684s: waiting for machine to come up
	I0103 20:12:36.788923   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:36.789429   61676 main.go:141] libmachine: (embed-certs-451331) DBG | unable to find current IP address of domain embed-certs-451331 in network mk-embed-certs-451331
	I0103 20:12:36.789452   61676 main.go:141] libmachine: (embed-certs-451331) DBG | I0103 20:12:36.789395   62575 retry.go:31] will retry after 1.436230248s: waiting for machine to come up
	I0103 20:12:38.227994   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:38.228374   61676 main.go:141] libmachine: (embed-certs-451331) DBG | unable to find current IP address of domain embed-certs-451331 in network mk-embed-certs-451331
	I0103 20:12:38.228397   61676 main.go:141] libmachine: (embed-certs-451331) DBG | I0103 20:12:38.228336   62575 retry.go:31] will retry after 2.127693351s: waiting for machine to come up
	I0103 20:12:40.358485   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:40.358968   61676 main.go:141] libmachine: (embed-certs-451331) DBG | unable to find current IP address of domain embed-certs-451331 in network mk-embed-certs-451331
	I0103 20:12:40.358998   61676 main.go:141] libmachine: (embed-certs-451331) DBG | I0103 20:12:40.358912   62575 retry.go:31] will retry after 1.816116886s: waiting for machine to come up
	I0103 20:12:42.177782   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:42.178359   61676 main.go:141] libmachine: (embed-certs-451331) DBG | unable to find current IP address of domain embed-certs-451331 in network mk-embed-certs-451331
	I0103 20:12:42.178390   61676 main.go:141] libmachine: (embed-certs-451331) DBG | I0103 20:12:42.178296   62575 retry.go:31] will retry after 3.199797073s: waiting for machine to come up
	I0103 20:12:45.381712   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:45.382053   61676 main.go:141] libmachine: (embed-certs-451331) DBG | unable to find current IP address of domain embed-certs-451331 in network mk-embed-certs-451331
	I0103 20:12:45.382075   61676 main.go:141] libmachine: (embed-certs-451331) DBG | I0103 20:12:45.381991   62575 retry.go:31] will retry after 3.573315393s: waiting for machine to come up
	I0103 20:12:50.159164   62015 start.go:369] acquired machines lock for "no-preload-749210" in 3m45.318070652s
	I0103 20:12:50.159226   62015 start.go:96] Skipping create...Using existing machine configuration
	I0103 20:12:50.159235   62015 fix.go:54] fixHost starting: 
	I0103 20:12:50.159649   62015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:12:50.159688   62015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:12:50.176573   62015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34959
	I0103 20:12:50.176998   62015 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:12:50.177504   62015 main.go:141] libmachine: Using API Version  1
	I0103 20:12:50.177529   62015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:12:50.177925   62015 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:12:50.178125   62015 main.go:141] libmachine: (no-preload-749210) Calling .DriverName
	I0103 20:12:50.178297   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetState
	I0103 20:12:50.179850   62015 fix.go:102] recreateIfNeeded on no-preload-749210: state=Stopped err=<nil>
	I0103 20:12:50.179873   62015 main.go:141] libmachine: (no-preload-749210) Calling .DriverName
	W0103 20:12:50.180066   62015 fix.go:128] unexpected machine state, will restart: <nil>
	I0103 20:12:50.182450   62015 out.go:177] * Restarting existing kvm2 VM for "no-preload-749210" ...
	I0103 20:12:48.959159   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:48.959637   61676 main.go:141] libmachine: (embed-certs-451331) Found IP for machine: 192.168.50.197
	I0103 20:12:48.959655   61676 main.go:141] libmachine: (embed-certs-451331) Reserving static IP address...
	I0103 20:12:48.959666   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has current primary IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:48.960051   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "embed-certs-451331", mac: "52:54:00:38:4a:19", ip: "192.168.50.197"} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:12:48.960073   61676 main.go:141] libmachine: (embed-certs-451331) DBG | skip adding static IP to network mk-embed-certs-451331 - found existing host DHCP lease matching {name: "embed-certs-451331", mac: "52:54:00:38:4a:19", ip: "192.168.50.197"}
	I0103 20:12:48.960086   61676 main.go:141] libmachine: (embed-certs-451331) Reserved static IP address: 192.168.50.197
	I0103 20:12:48.960101   61676 main.go:141] libmachine: (embed-certs-451331) Waiting for SSH to be available...
	I0103 20:12:48.960117   61676 main.go:141] libmachine: (embed-certs-451331) DBG | Getting to WaitForSSH function...
	I0103 20:12:48.962160   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:48.962443   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:4a:19", ip: ""} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:12:48.962478   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:48.962611   61676 main.go:141] libmachine: (embed-certs-451331) DBG | Using SSH client type: external
	I0103 20:12:48.962631   61676 main.go:141] libmachine: (embed-certs-451331) DBG | Using SSH private key: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/embed-certs-451331/id_rsa (-rw-------)
	I0103 20:12:48.962661   61676 main.go:141] libmachine: (embed-certs-451331) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.197 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17885-9609/.minikube/machines/embed-certs-451331/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0103 20:12:48.962681   61676 main.go:141] libmachine: (embed-certs-451331) DBG | About to run SSH command:
	I0103 20:12:48.962718   61676 main.go:141] libmachine: (embed-certs-451331) DBG | exit 0
	I0103 20:12:49.058790   61676 main.go:141] libmachine: (embed-certs-451331) DBG | SSH cmd err, output: <nil>: 
	I0103 20:12:49.059176   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetConfigRaw
	I0103 20:12:49.059838   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetIP
	I0103 20:12:49.062025   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:49.062407   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:4a:19", ip: ""} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:12:49.062440   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:49.062697   61676 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/embed-certs-451331/config.json ...
	I0103 20:12:49.062878   61676 machine.go:88] provisioning docker machine ...
	I0103 20:12:49.062894   61676 main.go:141] libmachine: (embed-certs-451331) Calling .DriverName
	I0103 20:12:49.063097   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetMachineName
	I0103 20:12:49.063258   61676 buildroot.go:166] provisioning hostname "embed-certs-451331"
	I0103 20:12:49.063278   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetMachineName
	I0103 20:12:49.063423   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHHostname
	I0103 20:12:49.065735   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:49.066121   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:4a:19", ip: ""} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:12:49.066161   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:49.066328   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHPort
	I0103 20:12:49.066507   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHKeyPath
	I0103 20:12:49.066695   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHKeyPath
	I0103 20:12:49.066860   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHUsername
	I0103 20:12:49.067065   61676 main.go:141] libmachine: Using SSH client type: native
	I0103 20:12:49.067455   61676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.50.197 22 <nil> <nil>}
	I0103 20:12:49.067469   61676 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-451331 && echo "embed-certs-451331" | sudo tee /etc/hostname
	I0103 20:12:49.210431   61676 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-451331
	
	I0103 20:12:49.210465   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHHostname
	I0103 20:12:49.213162   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:49.213503   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:4a:19", ip: ""} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:12:49.213573   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:49.213682   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHPort
	I0103 20:12:49.213911   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHKeyPath
	I0103 20:12:49.214094   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHKeyPath
	I0103 20:12:49.214289   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHUsername
	I0103 20:12:49.214449   61676 main.go:141] libmachine: Using SSH client type: native
	I0103 20:12:49.214837   61676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.50.197 22 <nil> <nil>}
	I0103 20:12:49.214856   61676 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-451331' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-451331/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-451331' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0103 20:12:49.350098   61676 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0103 20:12:49.350134   61676 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17885-9609/.minikube CaCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17885-9609/.minikube}
	I0103 20:12:49.350158   61676 buildroot.go:174] setting up certificates
	I0103 20:12:49.350172   61676 provision.go:83] configureAuth start
	I0103 20:12:49.350188   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetMachineName
	I0103 20:12:49.350497   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetIP
	I0103 20:12:49.352947   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:49.353356   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:4a:19", ip: ""} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:12:49.353387   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:49.353448   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHHostname
	I0103 20:12:49.355701   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:49.356005   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:4a:19", ip: ""} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:12:49.356033   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:49.356183   61676 provision.go:138] copyHostCerts
	I0103 20:12:49.356241   61676 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem, removing ...
	I0103 20:12:49.356254   61676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem
	I0103 20:12:49.356322   61676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem (1078 bytes)
	I0103 20:12:49.356413   61676 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem, removing ...
	I0103 20:12:49.356421   61676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem
	I0103 20:12:49.356446   61676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem (1123 bytes)
	I0103 20:12:49.356506   61676 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem, removing ...
	I0103 20:12:49.356513   61676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem
	I0103 20:12:49.356535   61676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem (1679 bytes)
	I0103 20:12:49.356587   61676 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem org=jenkins.embed-certs-451331 san=[192.168.50.197 192.168.50.197 localhost 127.0.0.1 minikube embed-certs-451331]
	I0103 20:12:49.413721   61676 provision.go:172] copyRemoteCerts
	I0103 20:12:49.413781   61676 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0103 20:12:49.413804   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHHostname
	I0103 20:12:49.416658   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:49.417143   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:4a:19", ip: ""} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:12:49.417170   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:49.417420   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHPort
	I0103 20:12:49.417617   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHKeyPath
	I0103 20:12:49.417814   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHUsername
	I0103 20:12:49.417977   61676 sshutil.go:53] new ssh client: &{IP:192.168.50.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/embed-certs-451331/id_rsa Username:docker}
	I0103 20:12:49.510884   61676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0103 20:12:49.533465   61676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0103 20:12:49.554895   61676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0103 20:12:49.576069   61676 provision.go:86] duration metric: configureAuth took 225.882364ms
	I0103 20:12:49.576094   61676 buildroot.go:189] setting minikube options for container-runtime
	I0103 20:12:49.576310   61676 config.go:182] Loaded profile config "embed-certs-451331": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 20:12:49.576387   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHHostname
	I0103 20:12:49.579119   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:49.579413   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:4a:19", ip: ""} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:12:49.579461   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:49.579590   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHPort
	I0103 20:12:49.579780   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHKeyPath
	I0103 20:12:49.579968   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHKeyPath
	I0103 20:12:49.580121   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHUsername
	I0103 20:12:49.580271   61676 main.go:141] libmachine: Using SSH client type: native
	I0103 20:12:49.580591   61676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.50.197 22 <nil> <nil>}
	I0103 20:12:49.580615   61676 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0103 20:12:49.883159   61676 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0103 20:12:49.883188   61676 machine.go:91] provisioned docker machine in 820.299871ms
	I0103 20:12:49.883199   61676 start.go:300] post-start starting for "embed-certs-451331" (driver="kvm2")
	I0103 20:12:49.883212   61676 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0103 20:12:49.883239   61676 main.go:141] libmachine: (embed-certs-451331) Calling .DriverName
	I0103 20:12:49.883565   61676 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0103 20:12:49.883599   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHHostname
	I0103 20:12:49.886365   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:49.886658   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:4a:19", ip: ""} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:12:49.886695   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:49.886878   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHPort
	I0103 20:12:49.887091   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHKeyPath
	I0103 20:12:49.887293   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHUsername
	I0103 20:12:49.887468   61676 sshutil.go:53] new ssh client: &{IP:192.168.50.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/embed-certs-451331/id_rsa Username:docker}
	I0103 20:12:49.985529   61676 ssh_runner.go:195] Run: cat /etc/os-release
	I0103 20:12:49.989732   61676 info.go:137] Remote host: Buildroot 2021.02.12
	I0103 20:12:49.989758   61676 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-9609/.minikube/addons for local assets ...
	I0103 20:12:49.989820   61676 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-9609/.minikube/files for local assets ...
	I0103 20:12:49.989891   61676 filesync.go:149] local asset: /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem -> 167952.pem in /etc/ssl/certs
	I0103 20:12:49.989981   61676 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0103 20:12:49.999882   61676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem --> /etc/ssl/certs/167952.pem (1708 bytes)
	I0103 20:12:50.022936   61676 start.go:303] post-start completed in 139.710189ms
	I0103 20:12:50.022966   61676 fix.go:56] fixHost completed within 19.427926379s
	I0103 20:12:50.023002   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHHostname
	I0103 20:12:50.025667   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:50.025940   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:4a:19", ip: ""} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:12:50.025973   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:50.026212   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHPort
	I0103 20:12:50.026424   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHKeyPath
	I0103 20:12:50.026671   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHKeyPath
	I0103 20:12:50.026838   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHUsername
	I0103 20:12:50.027074   61676 main.go:141] libmachine: Using SSH client type: native
	I0103 20:12:50.027381   61676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.50.197 22 <nil> <nil>}
	I0103 20:12:50.027393   61676 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0103 20:12:50.159031   61676 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704312770.110466062
	
	I0103 20:12:50.159053   61676 fix.go:206] guest clock: 1704312770.110466062
	I0103 20:12:50.159061   61676 fix.go:219] Guest: 2024-01-03 20:12:50.110466062 +0000 UTC Remote: 2024-01-03 20:12:50.022969488 +0000 UTC m=+256.568741537 (delta=87.496574ms)
	I0103 20:12:50.159083   61676 fix.go:190] guest clock delta is within tolerance: 87.496574ms
	I0103 20:12:50.159089   61676 start.go:83] releasing machines lock for "embed-certs-451331", held for 19.564082089s
	I0103 20:12:50.159117   61676 main.go:141] libmachine: (embed-certs-451331) Calling .DriverName
	I0103 20:12:50.159421   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetIP
	I0103 20:12:50.162216   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:50.162550   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:4a:19", ip: ""} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:12:50.162577   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:50.162762   61676 main.go:141] libmachine: (embed-certs-451331) Calling .DriverName
	I0103 20:12:50.163248   61676 main.go:141] libmachine: (embed-certs-451331) Calling .DriverName
	I0103 20:12:50.163433   61676 main.go:141] libmachine: (embed-certs-451331) Calling .DriverName
	I0103 20:12:50.163532   61676 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0103 20:12:50.163583   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHHostname
	I0103 20:12:50.163644   61676 ssh_runner.go:195] Run: cat /version.json
	I0103 20:12:50.163671   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHHostname
	I0103 20:12:50.166588   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:50.166753   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:50.166957   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:4a:19", ip: ""} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:12:50.166987   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:50.167192   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHPort
	I0103 20:12:50.167329   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:4a:19", ip: ""} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:12:50.167358   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:50.167362   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHKeyPath
	I0103 20:12:50.167500   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHUsername
	I0103 20:12:50.167590   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHPort
	I0103 20:12:50.167684   61676 sshutil.go:53] new ssh client: &{IP:192.168.50.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/embed-certs-451331/id_rsa Username:docker}
	I0103 20:12:50.167761   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHKeyPath
	I0103 20:12:50.167905   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHUsername
	I0103 20:12:50.168096   61676 sshutil.go:53] new ssh client: &{IP:192.168.50.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/embed-certs-451331/id_rsa Username:docker}
	I0103 20:12:50.298482   61676 ssh_runner.go:195] Run: systemctl --version
	I0103 20:12:50.304252   61676 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0103 20:12:50.442709   61676 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0103 20:12:50.448879   61676 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0103 20:12:50.448959   61676 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0103 20:12:50.467183   61676 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0103 20:12:50.467203   61676 start.go:475] detecting cgroup driver to use...
	I0103 20:12:50.467269   61676 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0103 20:12:50.482438   61676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0103 20:12:50.493931   61676 docker.go:203] disabling cri-docker service (if available) ...
	I0103 20:12:50.493997   61676 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0103 20:12:50.506860   61676 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0103 20:12:50.519279   61676 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0103 20:12:50.627391   61676 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0103 20:12:50.748160   61676 docker.go:219] disabling docker service ...
	I0103 20:12:50.748220   61676 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0103 20:12:50.760970   61676 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0103 20:12:50.772252   61676 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0103 20:12:50.889707   61676 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0103 20:12:51.003794   61676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0103 20:12:51.016226   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0103 20:12:51.032543   61676 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0103 20:12:51.032600   61676 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:12:51.042477   61676 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0103 20:12:51.042559   61676 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:12:51.053103   61676 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:12:51.063469   61676 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:12:51.073912   61676 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0103 20:12:51.083314   61676 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0103 20:12:51.092920   61676 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0103 20:12:51.092969   61676 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0103 20:12:51.106690   61676 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0103 20:12:51.115815   61676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0103 20:12:51.230139   61676 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0103 20:12:51.413184   61676 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0103 20:12:51.413315   61676 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0103 20:12:51.417926   61676 start.go:543] Will wait 60s for crictl version
	I0103 20:12:51.417988   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:12:51.421507   61676 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0103 20:12:51.465370   61676 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0103 20:12:51.465453   61676 ssh_runner.go:195] Run: crio --version
	I0103 20:12:51.519590   61676 ssh_runner.go:195] Run: crio --version
	I0103 20:12:51.582633   61676 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0103 20:12:51.583888   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetIP
	I0103 20:12:51.587068   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:51.587442   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:4a:19", ip: ""} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:12:51.587486   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:51.587724   61676 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0103 20:12:51.591798   61676 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0103 20:12:51.602798   61676 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0103 20:12:51.602871   61676 ssh_runner.go:195] Run: sudo crictl images --output json
	I0103 20:12:51.641736   61676 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0103 20:12:51.641799   61676 ssh_runner.go:195] Run: which lz4
	I0103 20:12:51.645386   61676 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0103 20:12:51.649168   61676 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0103 20:12:51.649196   61676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0103 20:12:53.428537   61676 crio.go:444] Took 1.783185 seconds to copy over tarball
	I0103 20:12:53.428601   61676 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0103 20:12:50.183891   62015 main.go:141] libmachine: (no-preload-749210) Calling .Start
	I0103 20:12:50.184083   62015 main.go:141] libmachine: (no-preload-749210) Ensuring networks are active...
	I0103 20:12:50.184749   62015 main.go:141] libmachine: (no-preload-749210) Ensuring network default is active
	I0103 20:12:50.185084   62015 main.go:141] libmachine: (no-preload-749210) Ensuring network mk-no-preload-749210 is active
	I0103 20:12:50.185435   62015 main.go:141] libmachine: (no-preload-749210) Getting domain xml...
	I0103 20:12:50.186067   62015 main.go:141] libmachine: (no-preload-749210) Creating domain...
	I0103 20:12:51.468267   62015 main.go:141] libmachine: (no-preload-749210) Waiting to get IP...
	I0103 20:12:51.469108   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:12:51.469584   62015 main.go:141] libmachine: (no-preload-749210) DBG | unable to find current IP address of domain no-preload-749210 in network mk-no-preload-749210
	I0103 20:12:51.469664   62015 main.go:141] libmachine: (no-preload-749210) DBG | I0103 20:12:51.469570   62702 retry.go:31] will retry after 254.191618ms: waiting for machine to come up
	I0103 20:12:51.724958   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:12:51.725657   62015 main.go:141] libmachine: (no-preload-749210) DBG | unable to find current IP address of domain no-preload-749210 in network mk-no-preload-749210
	I0103 20:12:51.725683   62015 main.go:141] libmachine: (no-preload-749210) DBG | I0103 20:12:51.725609   62702 retry.go:31] will retry after 279.489548ms: waiting for machine to come up
	I0103 20:12:52.007176   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:12:52.007682   62015 main.go:141] libmachine: (no-preload-749210) DBG | unable to find current IP address of domain no-preload-749210 in network mk-no-preload-749210
	I0103 20:12:52.007713   62015 main.go:141] libmachine: (no-preload-749210) DBG | I0103 20:12:52.007628   62702 retry.go:31] will retry after 422.96552ms: waiting for machine to come up
	I0103 20:12:52.432345   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:12:52.432873   62015 main.go:141] libmachine: (no-preload-749210) DBG | unable to find current IP address of domain no-preload-749210 in network mk-no-preload-749210
	I0103 20:12:52.432912   62015 main.go:141] libmachine: (no-preload-749210) DBG | I0103 20:12:52.432844   62702 retry.go:31] will retry after 561.295375ms: waiting for machine to come up
	I0103 20:12:52.995438   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:12:52.995929   62015 main.go:141] libmachine: (no-preload-749210) DBG | unable to find current IP address of domain no-preload-749210 in network mk-no-preload-749210
	I0103 20:12:52.995963   62015 main.go:141] libmachine: (no-preload-749210) DBG | I0103 20:12:52.995878   62702 retry.go:31] will retry after 547.962782ms: waiting for machine to come up
	I0103 20:12:53.545924   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:12:53.546473   62015 main.go:141] libmachine: (no-preload-749210) DBG | unable to find current IP address of domain no-preload-749210 in network mk-no-preload-749210
	I0103 20:12:53.546558   62015 main.go:141] libmachine: (no-preload-749210) DBG | I0103 20:12:53.546453   62702 retry.go:31] will retry after 927.631327ms: waiting for machine to come up
	I0103 20:12:54.475549   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:12:54.476000   62015 main.go:141] libmachine: (no-preload-749210) DBG | unable to find current IP address of domain no-preload-749210 in network mk-no-preload-749210
	I0103 20:12:54.476046   62015 main.go:141] libmachine: (no-preload-749210) DBG | I0103 20:12:54.475945   62702 retry.go:31] will retry after 880.192703ms: waiting for machine to come up
	I0103 20:12:56.224357   61676 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.795734066s)
	I0103 20:12:56.224386   61676 crio.go:451] Took 2.795820 seconds to extract the tarball
	I0103 20:12:56.224406   61676 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0103 20:12:56.266955   61676 ssh_runner.go:195] Run: sudo crictl images --output json
	I0103 20:12:56.318766   61676 crio.go:496] all images are preloaded for cri-o runtime.
	I0103 20:12:56.318789   61676 cache_images.go:84] Images are preloaded, skipping loading
	I0103 20:12:56.318871   61676 ssh_runner.go:195] Run: crio config
	I0103 20:12:56.378376   61676 cni.go:84] Creating CNI manager for ""
	I0103 20:12:56.378401   61676 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0103 20:12:56.378423   61676 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0103 20:12:56.378451   61676 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.197 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-451331 NodeName:embed-certs-451331 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.197"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.197 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0103 20:12:56.378619   61676 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.197
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-451331"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.197
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.197"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0103 20:12:56.378714   61676 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-451331 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.197
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-451331 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0103 20:12:56.378777   61676 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0103 20:12:56.387967   61676 binaries.go:44] Found k8s binaries, skipping transfer
	I0103 20:12:56.388037   61676 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0103 20:12:56.396000   61676 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0103 20:12:56.411880   61676 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0103 20:12:56.427567   61676 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0103 20:12:56.443342   61676 ssh_runner.go:195] Run: grep 192.168.50.197	control-plane.minikube.internal$ /etc/hosts
	I0103 20:12:56.446991   61676 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.197	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0103 20:12:56.458659   61676 certs.go:56] Setting up /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/embed-certs-451331 for IP: 192.168.50.197
	I0103 20:12:56.458696   61676 certs.go:190] acquiring lock for shared ca certs: {Name:mkcbd6a6a2f3ee7625ecf4a1f72bb7f9689bd33d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:12:56.458844   61676 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17885-9609/.minikube/ca.key
	I0103 20:12:56.458904   61676 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.key
	I0103 20:12:56.459010   61676 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/embed-certs-451331/client.key
	I0103 20:12:56.459092   61676 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/embed-certs-451331/apiserver.key.d719e12a
	I0103 20:12:56.459159   61676 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/embed-certs-451331/proxy-client.key
	I0103 20:12:56.459299   61676 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795.pem (1338 bytes)
	W0103 20:12:56.459341   61676 certs.go:433] ignoring /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795_empty.pem, impossibly tiny 0 bytes
	I0103 20:12:56.459358   61676 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem (1675 bytes)
	I0103 20:12:56.459400   61676 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem (1078 bytes)
	I0103 20:12:56.459434   61676 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem (1123 bytes)
	I0103 20:12:56.459466   61676 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem (1679 bytes)
	I0103 20:12:56.459522   61676 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem (1708 bytes)
	I0103 20:12:56.460408   61676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/embed-certs-451331/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0103 20:12:56.481997   61676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/embed-certs-451331/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0103 20:12:56.504016   61676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/embed-certs-451331/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0103 20:12:56.526477   61676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/embed-certs-451331/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0103 20:12:56.548471   61676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0103 20:12:56.570763   61676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0103 20:12:56.592910   61676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0103 20:12:56.617765   61676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0103 20:12:56.646025   61676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem --> /usr/share/ca-certificates/167952.pem (1708 bytes)
	I0103 20:12:56.668629   61676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0103 20:12:56.690927   61676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795.pem --> /usr/share/ca-certificates/16795.pem (1338 bytes)
	I0103 20:12:56.712067   61676 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0103 20:12:56.727773   61676 ssh_runner.go:195] Run: openssl version
	I0103 20:12:56.733000   61676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0103 20:12:56.742921   61676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:12:56.747499   61676 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  3 18:58 /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:12:56.747562   61676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:12:56.752732   61676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0103 20:12:56.762510   61676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16795.pem && ln -fs /usr/share/ca-certificates/16795.pem /etc/ssl/certs/16795.pem"
	I0103 20:12:56.772401   61676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16795.pem
	I0103 20:12:56.777123   61676 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  3 19:07 /usr/share/ca-certificates/16795.pem
	I0103 20:12:56.777180   61676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16795.pem
	I0103 20:12:56.782490   61676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16795.pem /etc/ssl/certs/51391683.0"
	I0103 20:12:56.793745   61676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167952.pem && ln -fs /usr/share/ca-certificates/167952.pem /etc/ssl/certs/167952.pem"
	I0103 20:12:56.805156   61676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167952.pem
	I0103 20:12:56.809897   61676 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  3 19:07 /usr/share/ca-certificates/167952.pem
	I0103 20:12:56.809954   61676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167952.pem
	I0103 20:12:56.815432   61676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167952.pem /etc/ssl/certs/3ec20f2e.0"
	I0103 20:12:56.826498   61676 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0103 20:12:56.831012   61676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0103 20:12:56.837150   61676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0103 20:12:56.843256   61676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0103 20:12:56.849182   61676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0103 20:12:56.854882   61676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0103 20:12:56.862018   61676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0103 20:12:56.867863   61676 kubeadm.go:404] StartCluster: {Name:embed-certs-451331 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-451331 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.197 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 20:12:56.867982   61676 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0103 20:12:56.868029   61676 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0103 20:12:56.909417   61676 cri.go:89] found id: ""
	I0103 20:12:56.909523   61676 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0103 20:12:56.919487   61676 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0103 20:12:56.919515   61676 kubeadm.go:636] restartCluster start
	I0103 20:12:56.919568   61676 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0103 20:12:56.929137   61676 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:12:56.930326   61676 kubeconfig.go:92] found "embed-certs-451331" server: "https://192.168.50.197:8443"
	I0103 20:12:56.932682   61676 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0103 20:12:56.941846   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:12:56.941909   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:12:56.953616   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:12:57.442188   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:12:57.442281   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:12:57.458303   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:12:57.942905   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:12:57.942988   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:12:57.955860   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:12:58.442326   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:12:58.442420   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:12:58.454294   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:12:55.357897   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:12:55.358462   62015 main.go:141] libmachine: (no-preload-749210) DBG | unable to find current IP address of domain no-preload-749210 in network mk-no-preload-749210
	I0103 20:12:55.358492   62015 main.go:141] libmachine: (no-preload-749210) DBG | I0103 20:12:55.358429   62702 retry.go:31] will retry after 1.158958207s: waiting for machine to come up
	I0103 20:12:56.518837   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:12:56.519260   62015 main.go:141] libmachine: (no-preload-749210) DBG | unable to find current IP address of domain no-preload-749210 in network mk-no-preload-749210
	I0103 20:12:56.519306   62015 main.go:141] libmachine: (no-preload-749210) DBG | I0103 20:12:56.519224   62702 retry.go:31] will retry after 1.620553071s: waiting for machine to come up
	I0103 20:12:58.141980   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:12:58.142505   62015 main.go:141] libmachine: (no-preload-749210) DBG | unable to find current IP address of domain no-preload-749210 in network mk-no-preload-749210
	I0103 20:12:58.142549   62015 main.go:141] libmachine: (no-preload-749210) DBG | I0103 20:12:58.142454   62702 retry.go:31] will retry after 1.525068593s: waiting for machine to come up
	I0103 20:12:59.670380   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:12:59.670880   62015 main.go:141] libmachine: (no-preload-749210) DBG | unable to find current IP address of domain no-preload-749210 in network mk-no-preload-749210
	I0103 20:12:59.670909   62015 main.go:141] libmachine: (no-preload-749210) DBG | I0103 20:12:59.670827   62702 retry.go:31] will retry after 1.772431181s: waiting for machine to come up
	I0103 20:12:58.942887   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:12:58.942975   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:12:58.956781   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:12:59.442313   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:12:59.442402   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:12:59.455837   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:12:59.942355   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:12:59.942439   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:12:59.954326   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:00.441870   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:13:00.441960   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:00.454004   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:00.941882   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:13:00.941995   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:00.958004   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:01.442573   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:13:01.442664   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:01.458604   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:01.942062   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:13:01.942170   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:01.958396   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:02.442928   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:13:02.443027   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:02.456612   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:02.941943   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:13:02.942056   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:02.953939   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:03.442552   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:13:03.442633   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:03.454840   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:01.445221   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:01.445608   62015 main.go:141] libmachine: (no-preload-749210) DBG | unable to find current IP address of domain no-preload-749210 in network mk-no-preload-749210
	I0103 20:13:01.445647   62015 main.go:141] libmachine: (no-preload-749210) DBG | I0103 20:13:01.445565   62702 retry.go:31] will retry after 2.830747633s: waiting for machine to come up
	I0103 20:13:04.279514   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:04.279996   62015 main.go:141] libmachine: (no-preload-749210) DBG | unable to find current IP address of domain no-preload-749210 in network mk-no-preload-749210
	I0103 20:13:04.280020   62015 main.go:141] libmachine: (no-preload-749210) DBG | I0103 20:13:04.279963   62702 retry.go:31] will retry after 4.03880385s: waiting for machine to come up
	I0103 20:13:03.942687   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:13:03.942774   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:03.954714   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:04.442265   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:13:04.442357   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:04.454216   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:04.942877   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:13:04.942952   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:04.954944   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:05.442467   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:13:05.442596   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:05.454305   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:05.942383   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:13:05.942468   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:05.954074   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:06.442723   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:13:06.442811   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:06.454629   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:06.942200   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:13:06.942283   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:06.953799   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
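	(The repeated block above is minikube polling for the kube-apiserver process over SSH roughly every 500 ms until the wait context expires. Below is a minimal, illustrative sketch of that kind of poll-until-deadline loop; the runSSH helper, the package name, and the 500 ms interval are assumptions for illustration, not minikube's actual api_server.go code.)

	package sketch

	import (
		"context"
		"fmt"
		"strings"
		"time"
	)

	// waitForAPIServerPID repeatedly runs pgrep on the guest over SSH until it
	// returns a PID or the context deadline passes. runSSH is an assumed helper
	// that executes a command remotely and returns its output and error.
	func waitForAPIServerPID(ctx context.Context, runSSH func(string) (string, error)) (string, error) {
		ticker := time.NewTicker(500 * time.Millisecond)
		defer ticker.Stop()
		for {
			out, err := runSSH("sudo pgrep -xnf kube-apiserver.*minikube.*")
			if err == nil {
				return strings.TrimSpace(out), nil // apiserver process found
			}
			select {
			case <-ctx.Done():
				// surfaces as "apiserver error: context deadline exceeded" in the log above
				return "", fmt.Errorf("apiserver error: %w", ctx.Err())
			case <-ticker.C:
				// not up yet; try again
			}
		}
	}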
	I0103 20:13:06.953829   61676 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0103 20:13:06.953836   61676 kubeadm.go:1135] stopping kube-system containers ...
	I0103 20:13:06.953845   61676 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0103 20:13:06.953904   61676 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0103 20:13:06.989109   61676 cri.go:89] found id: ""
	I0103 20:13:06.989214   61676 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0103 20:13:07.004822   61676 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0103 20:13:07.014393   61676 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0103 20:13:07.014454   61676 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0103 20:13:07.023669   61676 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0103 20:13:07.023691   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:07.139277   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:07.626388   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:07.814648   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:07.901750   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:07.962623   61676 api_server.go:52] waiting for apiserver process to appear ...
	I0103 20:13:07.962710   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:13:08.463820   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:13:08.322801   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:08.323160   62015 main.go:141] libmachine: (no-preload-749210) Found IP for machine: 192.168.61.245
	I0103 20:13:08.323203   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has current primary IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:08.323222   62015 main.go:141] libmachine: (no-preload-749210) Reserving static IP address...
	I0103 20:13:08.323600   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "no-preload-749210", mac: "52:54:00:fb:87:c7", ip: "192.168.61.245"} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:08.323632   62015 main.go:141] libmachine: (no-preload-749210) Reserved static IP address: 192.168.61.245
	I0103 20:13:08.323664   62015 main.go:141] libmachine: (no-preload-749210) DBG | skip adding static IP to network mk-no-preload-749210 - found existing host DHCP lease matching {name: "no-preload-749210", mac: "52:54:00:fb:87:c7", ip: "192.168.61.245"}
	I0103 20:13:08.323684   62015 main.go:141] libmachine: (no-preload-749210) DBG | Getting to WaitForSSH function...
	I0103 20:13:08.323698   62015 main.go:141] libmachine: (no-preload-749210) Waiting for SSH to be available...
	I0103 20:13:08.325529   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:08.325831   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:87:c7", ip: ""} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:08.325863   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:08.325949   62015 main.go:141] libmachine: (no-preload-749210) DBG | Using SSH client type: external
	I0103 20:13:08.325977   62015 main.go:141] libmachine: (no-preload-749210) DBG | Using SSH private key: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/no-preload-749210/id_rsa (-rw-------)
	I0103 20:13:08.326013   62015 main.go:141] libmachine: (no-preload-749210) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.245 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17885-9609/.minikube/machines/no-preload-749210/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0103 20:13:08.326030   62015 main.go:141] libmachine: (no-preload-749210) DBG | About to run SSH command:
	I0103 20:13:08.326053   62015 main.go:141] libmachine: (no-preload-749210) DBG | exit 0
	I0103 20:13:08.418368   62015 main.go:141] libmachine: (no-preload-749210) DBG | SSH cmd err, output: <nil>: 
	I0103 20:13:08.418718   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetConfigRaw
	I0103 20:13:08.419464   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetIP
	I0103 20:13:08.421838   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:08.422172   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:87:c7", ip: ""} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:08.422199   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:08.422460   62015 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/no-preload-749210/config.json ...
	I0103 20:13:08.422680   62015 machine.go:88] provisioning docker machine ...
	I0103 20:13:08.422702   62015 main.go:141] libmachine: (no-preload-749210) Calling .DriverName
	I0103 20:13:08.422883   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetMachineName
	I0103 20:13:08.423027   62015 buildroot.go:166] provisioning hostname "no-preload-749210"
	I0103 20:13:08.423047   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetMachineName
	I0103 20:13:08.423153   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHHostname
	I0103 20:13:08.425105   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:08.425377   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:87:c7", ip: ""} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:08.425408   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:08.425583   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHPort
	I0103 20:13:08.425734   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHKeyPath
	I0103 20:13:08.425869   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHKeyPath
	I0103 20:13:08.425987   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHUsername
	I0103 20:13:08.426160   62015 main.go:141] libmachine: Using SSH client type: native
	I0103 20:13:08.426488   62015 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.61.245 22 <nil> <nil>}
	I0103 20:13:08.426501   62015 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-749210 && echo "no-preload-749210" | sudo tee /etc/hostname
	I0103 20:13:08.579862   62015 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-749210
	
	I0103 20:13:08.579892   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHHostname
	I0103 20:13:08.583166   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:08.583600   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:87:c7", ip: ""} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:08.583635   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:08.583828   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHPort
	I0103 20:13:08.584039   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHKeyPath
	I0103 20:13:08.584225   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHKeyPath
	I0103 20:13:08.584391   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHUsername
	I0103 20:13:08.584593   62015 main.go:141] libmachine: Using SSH client type: native
	I0103 20:13:08.584928   62015 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.61.245 22 <nil> <nil>}
	I0103 20:13:08.584954   62015 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-749210' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-749210/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-749210' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0103 20:13:08.729661   62015 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0103 20:13:08.729697   62015 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17885-9609/.minikube CaCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17885-9609/.minikube}
	I0103 20:13:08.729738   62015 buildroot.go:174] setting up certificates
	I0103 20:13:08.729759   62015 provision.go:83] configureAuth start
	I0103 20:13:08.729776   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetMachineName
	I0103 20:13:08.730101   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetIP
	I0103 20:13:08.733282   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:08.733694   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:87:c7", ip: ""} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:08.733728   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:08.733868   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHHostname
	I0103 20:13:08.736223   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:08.736557   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:87:c7", ip: ""} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:08.736589   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:08.736763   62015 provision.go:138] copyHostCerts
	I0103 20:13:08.736830   62015 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem, removing ...
	I0103 20:13:08.736847   62015 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem
	I0103 20:13:08.736913   62015 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem (1078 bytes)
	I0103 20:13:08.737035   62015 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem, removing ...
	I0103 20:13:08.737047   62015 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem
	I0103 20:13:08.737077   62015 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem (1123 bytes)
	I0103 20:13:08.737177   62015 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem, removing ...
	I0103 20:13:08.737188   62015 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem
	I0103 20:13:08.737218   62015 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem (1679 bytes)
	I0103 20:13:08.737295   62015 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem org=jenkins.no-preload-749210 san=[192.168.61.245 192.168.61.245 localhost 127.0.0.1 minikube no-preload-749210]
	I0103 20:13:09.018604   62015 provision.go:172] copyRemoteCerts
	I0103 20:13:09.018662   62015 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0103 20:13:09.018684   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHHostname
	I0103 20:13:09.021339   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:09.021729   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:87:c7", ip: ""} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:09.021777   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:09.021852   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHPort
	I0103 20:13:09.022068   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHKeyPath
	I0103 20:13:09.022220   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHUsername
	I0103 20:13:09.022405   62015 sshutil.go:53] new ssh client: &{IP:192.168.61.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/no-preload-749210/id_rsa Username:docker}
	I0103 20:13:09.120023   62015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0103 20:13:09.143242   62015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0103 20:13:09.166206   62015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0103 20:13:09.192425   62015 provision.go:86] duration metric: configureAuth took 462.649611ms
	I0103 20:13:09.192457   62015 buildroot.go:189] setting minikube options for container-runtime
	I0103 20:13:09.192678   62015 config.go:182] Loaded profile config "no-preload-749210": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0103 20:13:09.192770   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHHostname
	I0103 20:13:09.195193   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:09.195594   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:87:c7", ip: ""} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:09.195633   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:09.195852   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHPort
	I0103 20:13:09.196100   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHKeyPath
	I0103 20:13:09.196272   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHKeyPath
	I0103 20:13:09.196437   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHUsername
	I0103 20:13:09.196637   62015 main.go:141] libmachine: Using SSH client type: native
	I0103 20:13:09.197028   62015 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.61.245 22 <nil> <nil>}
	I0103 20:13:09.197048   62015 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0103 20:13:09.528890   62015 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0103 20:13:09.528915   62015 machine.go:91] provisioned docker machine in 1.106221183s
	I0103 20:13:09.528924   62015 start.go:300] post-start starting for "no-preload-749210" (driver="kvm2")
	I0103 20:13:09.528949   62015 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0103 20:13:09.528966   62015 main.go:141] libmachine: (no-preload-749210) Calling .DriverName
	I0103 20:13:09.529337   62015 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0103 20:13:09.529372   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHHostname
	I0103 20:13:09.532679   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:09.533032   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:87:c7", ip: ""} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:09.533063   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:09.533262   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHPort
	I0103 20:13:09.533490   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHKeyPath
	I0103 20:13:09.533675   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHUsername
	I0103 20:13:09.533841   62015 sshutil.go:53] new ssh client: &{IP:192.168.61.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/no-preload-749210/id_rsa Username:docker}
	I0103 20:13:09.632949   62015 ssh_runner.go:195] Run: cat /etc/os-release
	I0103 20:13:09.638382   62015 info.go:137] Remote host: Buildroot 2021.02.12
	I0103 20:13:09.638421   62015 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-9609/.minikube/addons for local assets ...
	I0103 20:13:09.638502   62015 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-9609/.minikube/files for local assets ...
	I0103 20:13:09.638617   62015 filesync.go:149] local asset: /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem -> 167952.pem in /etc/ssl/certs
	I0103 20:13:09.638744   62015 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0103 20:13:09.650407   62015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem --> /etc/ssl/certs/167952.pem (1708 bytes)
	I0103 20:13:09.672528   62015 start.go:303] post-start completed in 143.577643ms
	I0103 20:13:09.672560   62015 fix.go:56] fixHost completed within 19.513324819s
	I0103 20:13:09.672585   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHHostname
	I0103 20:13:09.675037   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:09.675398   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:87:c7", ip: ""} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:09.675430   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:09.675587   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHPort
	I0103 20:13:09.675811   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHKeyPath
	I0103 20:13:09.675963   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHKeyPath
	I0103 20:13:09.676112   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHUsername
	I0103 20:13:09.676294   62015 main.go:141] libmachine: Using SSH client type: native
	I0103 20:13:09.676674   62015 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.61.245 22 <nil> <nil>}
	I0103 20:13:09.676690   62015 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0103 20:13:09.811720   62050 start.go:369] acquired machines lock for "default-k8s-diff-port-018788" in 4m4.205717121s
	I0103 20:13:09.811786   62050 start.go:96] Skipping create...Using existing machine configuration
	I0103 20:13:09.811797   62050 fix.go:54] fixHost starting: 
	I0103 20:13:09.812213   62050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:13:09.812257   62050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:13:09.831972   62050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36915
	I0103 20:13:09.832420   62050 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:13:09.832973   62050 main.go:141] libmachine: Using API Version  1
	I0103 20:13:09.833004   62050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:13:09.833345   62050 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:13:09.833505   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .DriverName
	I0103 20:13:09.833637   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetState
	I0103 20:13:09.835476   62050 fix.go:102] recreateIfNeeded on default-k8s-diff-port-018788: state=Stopped err=<nil>
	I0103 20:13:09.835520   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .DriverName
	W0103 20:13:09.835689   62050 fix.go:128] unexpected machine state, will restart: <nil>
	I0103 20:13:09.837499   62050 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-018788" ...
	I0103 20:13:09.838938   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .Start
	I0103 20:13:09.839117   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Ensuring networks are active...
	I0103 20:13:09.839888   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Ensuring network default is active
	I0103 20:13:09.840347   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Ensuring network mk-default-k8s-diff-port-018788 is active
	I0103 20:13:09.840765   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Getting domain xml...
	I0103 20:13:09.841599   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Creating domain...
	I0103 20:13:09.811571   62015 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704312789.764323206
	
	I0103 20:13:09.811601   62015 fix.go:206] guest clock: 1704312789.764323206
	I0103 20:13:09.811611   62015 fix.go:219] Guest: 2024-01-03 20:13:09.764323206 +0000 UTC Remote: 2024-01-03 20:13:09.672564299 +0000 UTC m=+244.986151230 (delta=91.758907ms)
	I0103 20:13:09.811636   62015 fix.go:190] guest clock delta is within tolerance: 91.758907ms
	I0103 20:13:09.811642   62015 start.go:83] releasing machines lock for "no-preload-749210", held for 19.652439302s
	I0103 20:13:09.811678   62015 main.go:141] libmachine: (no-preload-749210) Calling .DriverName
	I0103 20:13:09.811949   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetIP
	I0103 20:13:09.815012   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:09.815391   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:87:c7", ip: ""} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:09.815429   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:09.815641   62015 main.go:141] libmachine: (no-preload-749210) Calling .DriverName
	I0103 20:13:09.816177   62015 main.go:141] libmachine: (no-preload-749210) Calling .DriverName
	I0103 20:13:09.816363   62015 main.go:141] libmachine: (no-preload-749210) Calling .DriverName
	I0103 20:13:09.816471   62015 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0103 20:13:09.816509   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHHostname
	I0103 20:13:09.816620   62015 ssh_runner.go:195] Run: cat /version.json
	I0103 20:13:09.816646   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHHostname
	I0103 20:13:09.819652   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:09.819909   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:09.820058   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:87:c7", ip: ""} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:09.820088   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:09.820319   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:87:c7", ip: ""} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:09.820345   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:09.820377   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHPort
	I0103 20:13:09.820581   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHKeyPath
	I0103 20:13:09.820646   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHPort
	I0103 20:13:09.820753   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHUsername
	I0103 20:13:09.820822   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHKeyPath
	I0103 20:13:09.820910   62015 sshutil.go:53] new ssh client: &{IP:192.168.61.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/no-preload-749210/id_rsa Username:docker}
	I0103 20:13:09.821007   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHUsername
	I0103 20:13:09.821131   62015 sshutil.go:53] new ssh client: &{IP:192.168.61.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/no-preload-749210/id_rsa Username:docker}
	I0103 20:13:09.949119   62015 ssh_runner.go:195] Run: systemctl --version
	I0103 20:13:09.956247   62015 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0103 20:13:10.116715   62015 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0103 20:13:10.122512   62015 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0103 20:13:10.122640   62015 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0103 20:13:10.142239   62015 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0103 20:13:10.142265   62015 start.go:475] detecting cgroup driver to use...
	I0103 20:13:10.142336   62015 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0103 20:13:10.159473   62015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0103 20:13:10.175492   62015 docker.go:203] disabling cri-docker service (if available) ...
	I0103 20:13:10.175555   62015 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0103 20:13:10.191974   62015 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0103 20:13:10.208639   62015 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0103 20:13:10.343228   62015 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0103 20:13:10.457642   62015 docker.go:219] disabling docker service ...
	I0103 20:13:10.457720   62015 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0103 20:13:10.475117   62015 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0103 20:13:10.491265   62015 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0103 20:13:10.613064   62015 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0103 20:13:10.741969   62015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0103 20:13:10.755923   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0103 20:13:10.775483   62015 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0103 20:13:10.775550   62015 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:13:10.785489   62015 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0103 20:13:10.785557   62015 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:13:10.795303   62015 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:13:10.804763   62015 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:13:10.814559   62015 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0103 20:13:10.824431   62015 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0103 20:13:10.833193   62015 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0103 20:13:10.833273   62015 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0103 20:13:10.850446   62015 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0103 20:13:10.861775   62015 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0103 20:13:11.021577   62015 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0103 20:13:11.217675   62015 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0103 20:13:11.217748   62015 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0103 20:13:11.222475   62015 start.go:543] Will wait 60s for crictl version
	I0103 20:13:11.222552   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:13:11.226128   62015 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0103 20:13:11.266681   62015 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0103 20:13:11.266775   62015 ssh_runner.go:195] Run: crio --version
	I0103 20:13:11.313142   62015 ssh_runner.go:195] Run: crio --version
	I0103 20:13:11.358396   62015 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I0103 20:13:08.963472   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:13:09.462836   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:13:09.963771   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:13:09.991718   61676 api_server.go:72] duration metric: took 2.029094062s to wait for apiserver process to appear ...
	I0103 20:13:09.991748   61676 api_server.go:88] waiting for apiserver healthz status ...
	I0103 20:13:09.991769   61676 api_server.go:253] Checking apiserver healthz at https://192.168.50.197:8443/healthz ...
	I0103 20:13:09.992264   61676 api_server.go:269] stopped: https://192.168.50.197:8443/healthz: Get "https://192.168.50.197:8443/healthz": dial tcp 192.168.50.197:8443: connect: connection refused
	I0103 20:13:10.491803   61676 api_server.go:253] Checking apiserver healthz at https://192.168.50.197:8443/healthz ...
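	(Once the apiserver process exists, minikube switches from pgrep polling to probing the /healthz endpoint; a connection-refused error or a non-200 status, such as the 403 from the anonymous user seen further below, is logged as a warning and retried. A rough sketch of a single such probe is shown here, assuming a pre-built http.Client that already trusts the cluster CA; the client setup and function name are illustrative, not minikube's api_server.go.)

	package sketch

	import (
		"fmt"
		"io"
		"net/http"
	)

	// checkHealthz performs one probe of the apiserver /healthz endpoint.
	// It returns nil only on HTTP 200; a transport error or any other status
	// means the caller should keep retrying until its own deadline.
	func checkHealthz(client *http.Client, url string) error {
		resp, err := client.Get(url)
		if err != nil {
			return fmt.Errorf("stopped: %s: %w", url, err) // e.g. connection refused while apiserver restarts
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("%s returned %d:\n%s", url, resp.StatusCode, body) // e.g. 403 for system:anonymous before RBAC is ready
		}
		return nil
	}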
	I0103 20:13:11.359808   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetIP
	I0103 20:13:11.363074   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:11.363434   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:87:c7", ip: ""} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:11.363465   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:11.363695   62015 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0103 20:13:11.367689   62015 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0103 20:13:11.378693   62015 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0103 20:13:11.378746   62015 ssh_runner.go:195] Run: sudo crictl images --output json
	I0103 20:13:11.416544   62015 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0103 20:13:11.416570   62015 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0103 20:13:11.416642   62015 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 20:13:11.416698   62015 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0103 20:13:11.416724   62015 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0103 20:13:11.416699   62015 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0103 20:13:11.416929   62015 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0103 20:13:11.416671   62015 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0103 20:13:11.417054   62015 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0103 20:13:11.417093   62015 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0103 20:13:11.418600   62015 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0103 20:13:11.418621   62015 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0103 20:13:11.418630   62015 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0103 20:13:11.418646   62015 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0103 20:13:11.418661   62015 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 20:13:11.418675   62015 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0103 20:13:11.418685   62015 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0103 20:13:11.418697   62015 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0103 20:13:11.635223   62015 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0103 20:13:11.662007   62015 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0103 20:13:11.668522   62015 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0103 20:13:11.671471   62015 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0103 20:13:11.672069   62015 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0103 20:13:11.685216   62015 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0103 20:13:11.687462   62015 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0103 20:13:11.716775   62015 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0103 20:13:11.716825   62015 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0103 20:13:11.716882   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:13:11.762358   62015 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0103 20:13:11.762394   62015 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0103 20:13:11.762463   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:13:11.846225   62015 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0103 20:13:11.846268   62015 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0103 20:13:11.846317   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:13:11.846432   62015 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0103 20:13:11.846473   62015 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0103 20:13:11.846529   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:13:11.846515   62015 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0103 20:13:11.846655   62015 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0103 20:13:11.846711   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:13:11.956577   62015 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0103 20:13:11.956659   62015 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0103 20:13:11.956689   62015 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0103 20:13:11.956746   62015 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0103 20:13:11.956760   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:13:11.956782   62015 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0103 20:13:11.956820   62015 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0103 20:13:11.956873   62015 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0103 20:13:12.064715   62015 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0103 20:13:12.064764   62015 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0103 20:13:12.064720   62015 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0103 20:13:12.064856   62015 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0103 20:13:12.064903   62015 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0103 20:13:12.068647   62015 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0103 20:13:12.068685   62015 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0103 20:13:12.068752   62015 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0103 20:13:12.068767   62015 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0103 20:13:12.068771   62015 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0103 20:13:12.068841   62015 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0103 20:13:12.077600   62015 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0103 20:13:12.077622   62015 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0103 20:13:12.077682   62015 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0103 20:13:12.077798   62015 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0103 20:13:12.109729   62015 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0103 20:13:12.109778   62015 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0103 20:13:12.109838   62015 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0103 20:13:12.109927   62015 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0103 20:13:12.110020   62015 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I0103 20:13:12.237011   62015 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 20:13:14.279507   62015 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.201800359s)
	I0103 20:13:14.279592   62015 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0103 20:13:14.279606   62015 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0: (2.169553787s)
	I0103 20:13:14.279641   62015 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0103 20:13:14.279646   62015 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0103 20:13:14.279645   62015 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.042604307s)
	I0103 20:13:14.279725   62015 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0103 20:13:14.279726   62015 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0103 20:13:14.279760   62015 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 20:13:14.279802   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:13:14.285860   62015 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 20:13:11.246503   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting to get IP...
	I0103 20:13:11.247669   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:11.248203   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | unable to find current IP address of domain default-k8s-diff-port-018788 in network mk-default-k8s-diff-port-018788
	I0103 20:13:11.248301   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | I0103 20:13:11.248165   62835 retry.go:31] will retry after 292.358185ms: waiting for machine to come up
	I0103 20:13:11.541836   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:11.542224   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | unable to find current IP address of domain default-k8s-diff-port-018788 in network mk-default-k8s-diff-port-018788
	I0103 20:13:11.542257   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | I0103 20:13:11.542168   62835 retry.go:31] will retry after 370.634511ms: waiting for machine to come up
	I0103 20:13:11.914890   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:11.915372   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | unable to find current IP address of domain default-k8s-diff-port-018788 in network mk-default-k8s-diff-port-018788
	I0103 20:13:11.915403   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | I0103 20:13:11.915330   62835 retry.go:31] will retry after 304.80922ms: waiting for machine to come up
	I0103 20:13:12.221826   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:12.222257   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | unable to find current IP address of domain default-k8s-diff-port-018788 in network mk-default-k8s-diff-port-018788
	I0103 20:13:12.222289   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | I0103 20:13:12.222232   62835 retry.go:31] will retry after 534.177843ms: waiting for machine to come up
	I0103 20:13:12.757904   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:12.758389   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | unable to find current IP address of domain default-k8s-diff-port-018788 in network mk-default-k8s-diff-port-018788
	I0103 20:13:12.758422   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | I0103 20:13:12.758334   62835 retry.go:31] will retry after 749.166369ms: waiting for machine to come up
	I0103 20:13:13.509343   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:13.509938   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | unable to find current IP address of domain default-k8s-diff-port-018788 in network mk-default-k8s-diff-port-018788
	I0103 20:13:13.509984   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | I0103 20:13:13.509854   62835 retry.go:31] will retry after 716.215015ms: waiting for machine to come up
	I0103 20:13:14.227886   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:14.228388   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | unable to find current IP address of domain default-k8s-diff-port-018788 in network mk-default-k8s-diff-port-018788
	I0103 20:13:14.228414   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | I0103 20:13:14.228338   62835 retry.go:31] will retry after 1.095458606s: waiting for machine to come up
	I0103 20:13:15.324880   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:15.325299   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | unable to find current IP address of domain default-k8s-diff-port-018788 in network mk-default-k8s-diff-port-018788
	I0103 20:13:15.325332   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | I0103 20:13:15.325250   62835 retry.go:31] will retry after 1.266878415s: waiting for machine to come up
	I0103 20:13:14.427035   61676 api_server.go:279] https://192.168.50.197:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0103 20:13:14.427077   61676 api_server.go:103] status: https://192.168.50.197:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0103 20:13:14.427119   61676 api_server.go:253] Checking apiserver healthz at https://192.168.50.197:8443/healthz ...
	I0103 20:13:14.462068   61676 api_server.go:279] https://192.168.50.197:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0103 20:13:14.462115   61676 api_server.go:103] status: https://192.168.50.197:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0103 20:13:14.492283   61676 api_server.go:253] Checking apiserver healthz at https://192.168.50.197:8443/healthz ...
	I0103 20:13:14.500354   61676 api_server.go:279] https://192.168.50.197:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0103 20:13:14.500391   61676 api_server.go:103] status: https://192.168.50.197:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0103 20:13:14.991910   61676 api_server.go:253] Checking apiserver healthz at https://192.168.50.197:8443/healthz ...
	I0103 20:13:14.997522   61676 api_server.go:279] https://192.168.50.197:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0103 20:13:14.997550   61676 api_server.go:103] status: https://192.168.50.197:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0103 20:13:15.492157   61676 api_server.go:253] Checking apiserver healthz at https://192.168.50.197:8443/healthz ...
	I0103 20:13:15.500340   61676 api_server.go:279] https://192.168.50.197:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0103 20:13:15.500377   61676 api_server.go:103] status: https://192.168.50.197:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0103 20:13:15.992158   61676 api_server.go:253] Checking apiserver healthz at https://192.168.50.197:8443/healthz ...
	I0103 20:13:16.002940   61676 api_server.go:279] https://192.168.50.197:8443/healthz returned 200:
	ok
	I0103 20:13:16.020171   61676 api_server.go:141] control plane version: v1.28.4
	I0103 20:13:16.020205   61676 api_server.go:131] duration metric: took 6.028448633s to wait for apiserver health ...
	I0103 20:13:16.020216   61676 cni.go:84] Creating CNI manager for ""
	I0103 20:13:16.020226   61676 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0103 20:13:16.022596   61676 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0103 20:13:16.024514   61676 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0103 20:13:16.064582   61676 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0103 20:13:16.113727   61676 system_pods.go:43] waiting for kube-system pods to appear ...
	I0103 20:13:16.124984   61676 system_pods.go:59] 8 kube-system pods found
	I0103 20:13:16.125031   61676 system_pods.go:61] "coredns-5dd5756b68-sx6gg" [6a4ea161-1a32-4c3b-9a0d-b4c596492d8b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0103 20:13:16.125044   61676 system_pods.go:61] "etcd-embed-certs-451331" [01d6441d-5e39-405a-81df-c2ed1e28cf0b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0103 20:13:16.125061   61676 system_pods.go:61] "kube-apiserver-embed-certs-451331" [ed38f120-6a1a-48e7-9346-f792f2e13cfc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0103 20:13:16.125072   61676 system_pods.go:61] "kube-controller-manager-embed-certs-451331" [4ca17ea6-a7e6-425b-98ba-7f917ceb91a0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0103 20:13:16.125086   61676 system_pods.go:61] "kube-proxy-fsnb9" [d1f00cf1-e9c4-442b-a6b3-b633252b840c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0103 20:13:16.125097   61676 system_pods.go:61] "kube-scheduler-embed-certs-451331" [00ec8091-7ed7-40b0-8b63-1c548fa8632d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0103 20:13:16.125111   61676 system_pods.go:61] "metrics-server-57f55c9bc5-sm8rb" [12b9f83d-abf8-431c-a271-b8489d32f0de] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0103 20:13:16.125125   61676 system_pods.go:61] "storage-provisioner" [cbce49e7-cef5-40a1-a017-906fcc77ef66] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0103 20:13:16.125140   61676 system_pods.go:74] duration metric: took 11.390906ms to wait for pod list to return data ...
	I0103 20:13:16.125152   61676 node_conditions.go:102] verifying NodePressure condition ...
	I0103 20:13:16.133036   61676 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0103 20:13:16.133072   61676 node_conditions.go:123] node cpu capacity is 2
	I0103 20:13:16.133086   61676 node_conditions.go:105] duration metric: took 7.928329ms to run NodePressure ...
	I0103 20:13:16.133109   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:16.519151   61676 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0103 20:13:16.530359   61676 kubeadm.go:787] kubelet initialised
	I0103 20:13:16.530380   61676 kubeadm.go:788] duration metric: took 11.203465ms waiting for restarted kubelet to initialise ...
	I0103 20:13:16.530388   61676 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:13:16.540797   61676 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-sx6gg" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:16.550417   61676 pod_ready.go:97] node "embed-certs-451331" hosting pod "coredns-5dd5756b68-sx6gg" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-451331" has status "Ready":"False"
	I0103 20:13:16.550457   61676 pod_ready.go:81] duration metric: took 9.627239ms waiting for pod "coredns-5dd5756b68-sx6gg" in "kube-system" namespace to be "Ready" ...
	E0103 20:13:16.550475   61676 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-451331" hosting pod "coredns-5dd5756b68-sx6gg" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-451331" has status "Ready":"False"
	I0103 20:13:16.550486   61676 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-451331" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:16.557664   61676 pod_ready.go:97] node "embed-certs-451331" hosting pod "etcd-embed-certs-451331" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-451331" has status "Ready":"False"
	I0103 20:13:16.557693   61676 pod_ready.go:81] duration metric: took 7.191907ms waiting for pod "etcd-embed-certs-451331" in "kube-system" namespace to be "Ready" ...
	E0103 20:13:16.557705   61676 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-451331" hosting pod "etcd-embed-certs-451331" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-451331" has status "Ready":"False"
	I0103 20:13:16.557721   61676 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-451331" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:16.566973   61676 pod_ready.go:97] node "embed-certs-451331" hosting pod "kube-apiserver-embed-certs-451331" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-451331" has status "Ready":"False"
	I0103 20:13:16.567007   61676 pod_ready.go:81] duration metric: took 9.268451ms waiting for pod "kube-apiserver-embed-certs-451331" in "kube-system" namespace to be "Ready" ...
	E0103 20:13:16.567019   61676 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-451331" hosting pod "kube-apiserver-embed-certs-451331" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-451331" has status "Ready":"False"
	I0103 20:13:16.567027   61676 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-451331" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:16.587777   61676 pod_ready.go:97] node "embed-certs-451331" hosting pod "kube-controller-manager-embed-certs-451331" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-451331" has status "Ready":"False"
	I0103 20:13:16.587811   61676 pod_ready.go:81] duration metric: took 20.769874ms waiting for pod "kube-controller-manager-embed-certs-451331" in "kube-system" namespace to be "Ready" ...
	E0103 20:13:16.587825   61676 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-451331" hosting pod "kube-controller-manager-embed-certs-451331" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-451331" has status "Ready":"False"
	I0103 20:13:16.587832   61676 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-fsnb9" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:16.923613   61676 pod_ready.go:97] node "embed-certs-451331" hosting pod "kube-proxy-fsnb9" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-451331" has status "Ready":"False"
	I0103 20:13:16.923643   61676 pod_ready.go:81] duration metric: took 335.80096ms waiting for pod "kube-proxy-fsnb9" in "kube-system" namespace to be "Ready" ...
	E0103 20:13:16.923655   61676 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-451331" hosting pod "kube-proxy-fsnb9" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-451331" has status "Ready":"False"
	I0103 20:13:16.923663   61676 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-451331" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:17.323875   61676 pod_ready.go:97] node "embed-certs-451331" hosting pod "kube-scheduler-embed-certs-451331" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-451331" has status "Ready":"False"
	I0103 20:13:17.323911   61676 pod_ready.go:81] duration metric: took 400.238515ms waiting for pod "kube-scheduler-embed-certs-451331" in "kube-system" namespace to be "Ready" ...
	E0103 20:13:17.323922   61676 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-451331" hosting pod "kube-scheduler-embed-certs-451331" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-451331" has status "Ready":"False"
	I0103 20:13:17.323931   61676 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:17.724694   61676 pod_ready.go:97] node "embed-certs-451331" hosting pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-451331" has status "Ready":"False"
	I0103 20:13:17.724727   61676 pod_ready.go:81] duration metric: took 400.785148ms waiting for pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace to be "Ready" ...
	E0103 20:13:17.724741   61676 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-451331" hosting pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-451331" has status "Ready":"False"
	I0103 20:13:17.724750   61676 pod_ready.go:38] duration metric: took 1.194352759s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:13:17.724774   61676 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0103 20:13:17.754724   61676 ops.go:34] apiserver oom_adj: -16
	I0103 20:13:17.754762   61676 kubeadm.go:640] restartCluster took 20.835238159s
	I0103 20:13:17.754774   61676 kubeadm.go:406] StartCluster complete in 20.886921594s
	I0103 20:13:17.754794   61676 settings.go:142] acquiring lock: {Name:mkd213c48538fa01cb82b417485055a8adbf5e4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:13:17.754875   61676 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17885-9609/kubeconfig
	I0103 20:13:17.757638   61676 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-9609/kubeconfig: {Name:mkbd4e6a8b39f5a4a43fb71671a7bbd8b1617cf1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:13:17.759852   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0103 20:13:17.759948   61676 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0103 20:13:17.760022   61676 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-451331"
	I0103 20:13:17.760049   61676 addons.go:237] Setting addon storage-provisioner=true in "embed-certs-451331"
	W0103 20:13:17.760060   61676 addons.go:246] addon storage-provisioner should already be in state true
	I0103 20:13:17.760105   61676 host.go:66] Checking if "embed-certs-451331" exists ...
	I0103 20:13:17.760154   61676 config.go:182] Loaded profile config "embed-certs-451331": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 20:13:17.760202   61676 addons.go:69] Setting default-storageclass=true in profile "embed-certs-451331"
	I0103 20:13:17.760227   61676 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-451331"
	I0103 20:13:17.760525   61676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:13:17.760553   61676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:13:17.760595   61676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:13:17.760619   61676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:13:17.760814   61676 addons.go:69] Setting metrics-server=true in profile "embed-certs-451331"
	I0103 20:13:17.760869   61676 addons.go:237] Setting addon metrics-server=true in "embed-certs-451331"
	W0103 20:13:17.760887   61676 addons.go:246] addon metrics-server should already be in state true
	I0103 20:13:17.760949   61676 host.go:66] Checking if "embed-certs-451331" exists ...
	I0103 20:13:17.761311   61676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:13:17.761367   61676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:13:17.778350   61676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36365
	I0103 20:13:17.778603   61676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40503
	I0103 20:13:17.778840   61676 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:13:17.778947   61676 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:13:17.779349   61676 main.go:141] libmachine: Using API Version  1
	I0103 20:13:17.779369   61676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:13:17.779496   61676 main.go:141] libmachine: Using API Version  1
	I0103 20:13:17.779506   61676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:13:17.779894   61676 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:13:17.779936   61676 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:13:17.780390   61676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46541
	I0103 20:13:17.780507   61676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:13:17.780528   61676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:13:17.780892   61676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:13:17.780933   61676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:13:17.781532   61676 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:13:17.782012   61676 main.go:141] libmachine: Using API Version  1
	I0103 20:13:17.782030   61676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:13:17.782393   61676 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:13:17.782580   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetState
	I0103 20:13:17.786209   61676 addons.go:237] Setting addon default-storageclass=true in "embed-certs-451331"
	W0103 20:13:17.786231   61676 addons.go:246] addon default-storageclass should already be in state true
	I0103 20:13:17.786264   61676 host.go:66] Checking if "embed-certs-451331" exists ...
	I0103 20:13:17.786730   61676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:13:17.786761   61676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:13:17.796538   61676 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-451331" context rescaled to 1 replicas
	I0103 20:13:17.796579   61676 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.197 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0103 20:13:17.798616   61676 out.go:177] * Verifying Kubernetes components...
	I0103 20:13:17.800702   61676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 20:13:17.799744   61676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37933
	I0103 20:13:17.801004   61676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37585
	I0103 20:13:17.801125   61676 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:13:17.801622   61676 main.go:141] libmachine: Using API Version  1
	I0103 20:13:17.801643   61676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:13:17.801967   61676 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:13:17.802456   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetState
	I0103 20:13:17.804195   61676 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:13:17.804537   61676 main.go:141] libmachine: (embed-certs-451331) Calling .DriverName
	I0103 20:13:17.804683   61676 main.go:141] libmachine: Using API Version  1
	I0103 20:13:17.804700   61676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:13:17.806577   61676 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 20:13:17.805108   61676 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:13:17.807681   61676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42317
	I0103 20:13:17.808340   61676 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0103 20:13:17.808354   61676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0103 20:13:17.808371   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHHostname
	I0103 20:13:17.808513   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetState
	I0103 20:13:17.809005   61676 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:13:17.809510   61676 main.go:141] libmachine: Using API Version  1
	I0103 20:13:17.809529   61676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:13:17.809978   61676 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:13:17.810778   61676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:13:17.810822   61676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:13:17.812250   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:13:17.812607   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:4a:19", ip: ""} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:13:17.812629   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:13:17.812892   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHPort
	I0103 20:13:17.812970   61676 main.go:141] libmachine: (embed-certs-451331) Calling .DriverName
	I0103 20:13:17.813069   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHKeyPath
	I0103 20:13:17.815321   61676 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0103 20:13:17.813342   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHUsername
	I0103 20:13:17.817289   61676 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0103 20:13:17.817308   61676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0103 20:13:17.817336   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHHostname
	I0103 20:13:17.817473   61676 sshutil.go:53] new ssh client: &{IP:192.168.50.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/embed-certs-451331/id_rsa Username:docker}
	I0103 20:13:17.820418   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:13:17.820892   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:4a:19", ip: ""} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:13:17.820920   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:13:17.821168   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHPort
	I0103 20:13:17.821350   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHKeyPath
	I0103 20:13:17.821468   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHUsername
	I0103 20:13:17.821597   61676 sshutil.go:53] new ssh client: &{IP:192.168.50.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/embed-certs-451331/id_rsa Username:docker}
	I0103 20:13:17.829857   61676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34553
	I0103 20:13:17.830343   61676 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:13:17.830847   61676 main.go:141] libmachine: Using API Version  1
	I0103 20:13:17.830869   61676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:13:17.831278   61676 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:13:17.831432   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetState
	I0103 20:13:17.833351   61676 main.go:141] libmachine: (embed-certs-451331) Calling .DriverName
	I0103 20:13:17.833678   61676 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0103 20:13:17.833695   61676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0103 20:13:17.833714   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHHostname
	I0103 20:13:17.837454   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:13:17.837708   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:4a:19", ip: ""} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:13:17.837730   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:13:17.837975   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHPort
	I0103 20:13:17.838211   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHKeyPath
	I0103 20:13:17.838384   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHUsername
	I0103 20:13:17.838534   61676 sshutil.go:53] new ssh client: &{IP:192.168.50.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/embed-certs-451331/id_rsa Username:docker}
	I0103 20:13:18.036885   61676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0103 20:13:18.097340   61676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0103 20:13:18.099953   61676 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0103 20:13:18.099982   61676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0103 20:13:18.242823   61676 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0103 20:13:18.242847   61676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0103 20:13:18.309930   61676 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0103 20:13:18.309959   61676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0103 20:13:18.321992   61676 node_ready.go:35] waiting up to 6m0s for node "embed-certs-451331" to be "Ready" ...
	I0103 20:13:18.322077   61676 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0103 20:13:18.366727   61676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0103 20:13:16.441666   62015 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.161911946s)
	I0103 20:13:16.441698   62015 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0103 20:13:16.441720   62015 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0103 20:13:16.441740   62015 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.155838517s)
	I0103 20:13:16.441767   62015 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0103 20:13:16.441855   62015 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0103 20:13:16.441964   62015 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0103 20:13:20.073248   61676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.975867864s)
	I0103 20:13:20.073318   61676 main.go:141] libmachine: Making call to close driver server
	I0103 20:13:20.073383   61676 main.go:141] libmachine: (embed-certs-451331) Calling .Close
	I0103 20:13:20.073265   61676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.03634078s)
	I0103 20:13:20.073419   61676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.706641739s)
	I0103 20:13:20.073466   61676 main.go:141] libmachine: Making call to close driver server
	I0103 20:13:20.073490   61676 main.go:141] libmachine: (embed-certs-451331) Calling .Close
	I0103 20:13:20.073489   61676 main.go:141] libmachine: Making call to close driver server
	I0103 20:13:20.073553   61676 main.go:141] libmachine: (embed-certs-451331) Calling .Close
	I0103 20:13:20.073744   61676 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:13:20.073759   61676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:13:20.073775   61676 main.go:141] libmachine: Making call to close driver server
	I0103 20:13:20.073786   61676 main.go:141] libmachine: (embed-certs-451331) Calling .Close
	I0103 20:13:20.073878   61676 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:13:20.073905   61676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:13:20.073935   61676 main.go:141] libmachine: (embed-certs-451331) DBG | Closing plugin on server side
	I0103 20:13:20.073938   61676 main.go:141] libmachine: Making call to close driver server
	I0103 20:13:20.073980   61676 main.go:141] libmachine: (embed-certs-451331) DBG | Closing plugin on server side
	I0103 20:13:20.073992   61676 main.go:141] libmachine: (embed-certs-451331) Calling .Close
	I0103 20:13:20.074016   61676 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:13:20.074036   61676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:13:20.074073   61676 main.go:141] libmachine: Making call to close driver server
	I0103 20:13:20.074086   61676 main.go:141] libmachine: (embed-certs-451331) Calling .Close
	I0103 20:13:20.074309   61676 main.go:141] libmachine: (embed-certs-451331) DBG | Closing plugin on server side
	I0103 20:13:20.074369   61676 main.go:141] libmachine: (embed-certs-451331) DBG | Closing plugin on server side
	I0103 20:13:20.074428   61676 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:13:20.074476   61676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:13:20.074454   61676 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:13:20.074506   61676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:13:20.074558   61676 addons.go:473] Verifying addon metrics-server=true in "embed-certs-451331"
	I0103 20:13:20.077560   61676 main.go:141] libmachine: (embed-certs-451331) DBG | Closing plugin on server side
	I0103 20:13:20.077613   61676 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:13:20.077653   61676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:13:20.088401   61676 main.go:141] libmachine: Making call to close driver server
	I0103 20:13:20.088441   61676 main.go:141] libmachine: (embed-certs-451331) Calling .Close
	I0103 20:13:20.088845   61676 main.go:141] libmachine: (embed-certs-451331) DBG | Closing plugin on server side
	I0103 20:13:20.090413   61676 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:13:20.090439   61676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:13:20.092641   61676 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I0103 20:13:16.593786   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:16.594320   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | unable to find current IP address of domain default-k8s-diff-port-018788 in network mk-default-k8s-diff-port-018788
	I0103 20:13:16.594352   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | I0103 20:13:16.594229   62835 retry.go:31] will retry after 1.232411416s: waiting for machine to come up
	I0103 20:13:17.828286   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:17.832049   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | unable to find current IP address of domain default-k8s-diff-port-018788 in network mk-default-k8s-diff-port-018788
	I0103 20:13:17.832078   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | I0103 20:13:17.828787   62835 retry.go:31] will retry after 2.020753248s: waiting for machine to come up
	I0103 20:13:19.851119   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:19.851645   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | unable to find current IP address of domain default-k8s-diff-port-018788 in network mk-default-k8s-diff-port-018788
	I0103 20:13:19.851683   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | I0103 20:13:19.851595   62835 retry.go:31] will retry after 2.720330873s: waiting for machine to come up
	I0103 20:13:20.094375   61676 addons.go:508] enable addons completed in 2.334425533s: enabled=[storage-provisioner metrics-server default-storageclass]
	I0103 20:13:20.325950   61676 node_ready.go:58] node "embed-certs-451331" has status "Ready":"False"
	I0103 20:13:22.327709   61676 node_ready.go:58] node "embed-certs-451331" has status "Ready":"False"
	I0103 20:13:19.820972   62015 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (3.379182556s)
	I0103 20:13:19.821009   62015 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0103 20:13:19.821032   62015 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0103 20:13:19.820976   62015 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (3.378974193s)
	I0103 20:13:19.821081   62015 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0103 20:13:19.821092   62015 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0103 20:13:21.294764   62015 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.47365805s)
	I0103 20:13:21.294796   62015 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0103 20:13:21.294826   62015 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0103 20:13:21.294879   62015 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0103 20:13:24.067996   62015 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.773083678s)
	I0103 20:13:24.068031   62015 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0103 20:13:24.068071   62015 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0103 20:13:24.068131   62015 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0103 20:13:22.573532   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:22.573959   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | unable to find current IP address of domain default-k8s-diff-port-018788 in network mk-default-k8s-diff-port-018788
	I0103 20:13:22.573984   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | I0103 20:13:22.573882   62835 retry.go:31] will retry after 2.869192362s: waiting for machine to come up
	I0103 20:13:25.444272   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:25.444774   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | unable to find current IP address of domain default-k8s-diff-port-018788 in network mk-default-k8s-diff-port-018788
	I0103 20:13:25.444801   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | I0103 20:13:25.444710   62835 retry.go:31] will retry after 3.61848561s: waiting for machine to come up
	I0103 20:13:24.327795   61676 node_ready.go:58] node "embed-certs-451331" has status "Ready":"False"
	I0103 20:13:24.831015   61676 node_ready.go:49] node "embed-certs-451331" has status "Ready":"True"
	I0103 20:13:24.831037   61676 node_ready.go:38] duration metric: took 6.509012992s waiting for node "embed-certs-451331" to be "Ready" ...
	I0103 20:13:24.831046   61676 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:13:24.838244   61676 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-sx6gg" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:25.345945   61676 pod_ready.go:92] pod "coredns-5dd5756b68-sx6gg" in "kube-system" namespace has status "Ready":"True"
	I0103 20:13:25.345980   61676 pod_ready.go:81] duration metric: took 507.709108ms waiting for pod "coredns-5dd5756b68-sx6gg" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:25.345991   61676 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-451331" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:25.352763   61676 pod_ready.go:92] pod "etcd-embed-certs-451331" in "kube-system" namespace has status "Ready":"True"
	I0103 20:13:25.352798   61676 pod_ready.go:81] duration metric: took 6.794419ms waiting for pod "etcd-embed-certs-451331" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:25.352812   61676 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-451331" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:25.359491   61676 pod_ready.go:92] pod "kube-apiserver-embed-certs-451331" in "kube-system" namespace has status "Ready":"True"
	I0103 20:13:25.359533   61676 pod_ready.go:81] duration metric: took 6.711829ms waiting for pod "kube-apiserver-embed-certs-451331" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:25.359547   61676 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-451331" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:25.867866   61676 pod_ready.go:92] pod "kube-controller-manager-embed-certs-451331" in "kube-system" namespace has status "Ready":"True"
	I0103 20:13:25.867898   61676 pod_ready.go:81] duration metric: took 508.341809ms waiting for pod "kube-controller-manager-embed-certs-451331" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:25.867912   61676 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fsnb9" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:26.026106   61676 pod_ready.go:92] pod "kube-proxy-fsnb9" in "kube-system" namespace has status "Ready":"True"
	I0103 20:13:26.026140   61676 pod_ready.go:81] duration metric: took 158.216243ms waiting for pod "kube-proxy-fsnb9" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:26.026153   61676 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-451331" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:26.428480   61676 pod_ready.go:92] pod "kube-scheduler-embed-certs-451331" in "kube-system" namespace has status "Ready":"True"
	I0103 20:13:26.428506   61676 pod_ready.go:81] duration metric: took 402.345241ms waiting for pod "kube-scheduler-embed-certs-451331" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:26.428525   61676 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:28.438138   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:13:27.768745   62015 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (3.700590535s)
	I0103 20:13:27.768774   62015 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0103 20:13:27.768797   62015 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0103 20:13:27.768833   62015 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0103 20:13:28.718165   62015 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0103 20:13:28.718231   62015 cache_images.go:123] Successfully loaded all cached images
	I0103 20:13:28.718239   62015 cache_images.go:92] LoadImages completed in 17.301651166s
	I0103 20:13:28.718342   62015 ssh_runner.go:195] Run: crio config
	I0103 20:13:28.770786   62015 cni.go:84] Creating CNI manager for ""
	I0103 20:13:28.770813   62015 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0103 20:13:28.770838   62015 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0103 20:13:28.770862   62015 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.245 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-749210 NodeName:no-preload-749210 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.245"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.245 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0103 20:13:28.771031   62015 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.245
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-749210"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.245
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.245"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0103 20:13:28.771103   62015 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-749210 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.245
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-749210 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0103 20:13:28.771163   62015 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0103 20:13:28.780756   62015 binaries.go:44] Found k8s binaries, skipping transfer
	I0103 20:13:28.780834   62015 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0103 20:13:28.789160   62015 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I0103 20:13:28.804638   62015 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0103 20:13:28.820113   62015 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I0103 20:13:28.835707   62015 ssh_runner.go:195] Run: grep 192.168.61.245	control-plane.minikube.internal$ /etc/hosts
	I0103 20:13:28.839456   62015 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.245	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0103 20:13:28.850530   62015 certs.go:56] Setting up /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/no-preload-749210 for IP: 192.168.61.245
	I0103 20:13:28.850581   62015 certs.go:190] acquiring lock for shared ca certs: {Name:mkcbd6a6a2f3ee7625ecf4a1f72bb7f9689bd33d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:13:28.850730   62015 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17885-9609/.minikube/ca.key
	I0103 20:13:28.850770   62015 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.key
	I0103 20:13:28.850833   62015 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/no-preload-749210/client.key
	I0103 20:13:28.850886   62015 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/no-preload-749210/apiserver.key.5dd805e0
	I0103 20:13:28.850922   62015 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/no-preload-749210/proxy-client.key
	I0103 20:13:28.851054   62015 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795.pem (1338 bytes)
	W0103 20:13:28.851081   62015 certs.go:433] ignoring /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795_empty.pem, impossibly tiny 0 bytes
	I0103 20:13:28.851093   62015 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem (1675 bytes)
	I0103 20:13:28.851117   62015 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem (1078 bytes)
	I0103 20:13:28.851139   62015 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem (1123 bytes)
	I0103 20:13:28.851168   62015 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem (1679 bytes)
	I0103 20:13:28.851210   62015 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem (1708 bytes)
	I0103 20:13:28.851832   62015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/no-preload-749210/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0103 20:13:28.874236   62015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/no-preload-749210/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0103 20:13:28.896624   62015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/no-preload-749210/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0103 20:13:28.919016   62015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/no-preload-749210/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0103 20:13:28.941159   62015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0103 20:13:28.963311   62015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0103 20:13:28.985568   62015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0103 20:13:29.007709   62015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0103 20:13:29.030188   62015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0103 20:13:29.052316   62015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795.pem --> /usr/share/ca-certificates/16795.pem (1338 bytes)
	I0103 20:13:29.076761   62015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem --> /usr/share/ca-certificates/167952.pem (1708 bytes)
	I0103 20:13:29.101462   62015 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0103 20:13:29.118605   62015 ssh_runner.go:195] Run: openssl version
	I0103 20:13:29.124144   62015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0103 20:13:29.133148   62015 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:13:29.137750   62015 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  3 18:58 /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:13:29.137809   62015 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:13:29.143321   62015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0103 20:13:29.152302   62015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16795.pem && ln -fs /usr/share/ca-certificates/16795.pem /etc/ssl/certs/16795.pem"
	I0103 20:13:29.161551   62015 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16795.pem
	I0103 20:13:29.166396   62015 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  3 19:07 /usr/share/ca-certificates/16795.pem
	I0103 20:13:29.166457   62015 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16795.pem
	I0103 20:13:29.173179   62015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16795.pem /etc/ssl/certs/51391683.0"
	I0103 20:13:29.184167   62015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167952.pem && ln -fs /usr/share/ca-certificates/167952.pem /etc/ssl/certs/167952.pem"
	I0103 20:13:29.194158   62015 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167952.pem
	I0103 20:13:29.198763   62015 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  3 19:07 /usr/share/ca-certificates/167952.pem
	I0103 20:13:29.198836   62015 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167952.pem
	I0103 20:13:29.204516   62015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167952.pem /etc/ssl/certs/3ec20f2e.0"
	I0103 20:13:29.214529   62015 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0103 20:13:29.218834   62015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0103 20:13:29.225036   62015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0103 20:13:29.231166   62015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0103 20:13:29.237200   62015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0103 20:13:29.243158   62015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0103 20:13:29.249694   62015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0103 20:13:29.255582   62015 kubeadm.go:404] StartCluster: {Name:no-preload-749210 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-749210 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.245 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 20:13:29.255672   62015 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0103 20:13:29.255758   62015 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0103 20:13:29.299249   62015 cri.go:89] found id: ""
	I0103 20:13:29.299346   62015 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0103 20:13:29.311210   62015 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0103 20:13:29.311227   62015 kubeadm.go:636] restartCluster start
	I0103 20:13:29.311271   62015 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0103 20:13:29.320430   62015 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:29.321471   62015 kubeconfig.go:92] found "no-preload-749210" server: "https://192.168.61.245:8443"
	I0103 20:13:29.324643   62015 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0103 20:13:29.333237   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:29.333300   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:29.345156   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:30.219284   61400 start.go:369] acquired machines lock for "old-k8s-version-927922" in 54.622555379s
	I0103 20:13:30.219352   61400 start.go:96] Skipping create...Using existing machine configuration
	I0103 20:13:30.219364   61400 fix.go:54] fixHost starting: 
	I0103 20:13:30.219739   61400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:13:30.219770   61400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:13:30.235529   61400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41183
	I0103 20:13:30.235926   61400 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:13:30.236537   61400 main.go:141] libmachine: Using API Version  1
	I0103 20:13:30.236562   61400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:13:30.236911   61400 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:13:30.237121   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .DriverName
	I0103 20:13:30.237293   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetState
	I0103 20:13:30.238979   61400 fix.go:102] recreateIfNeeded on old-k8s-version-927922: state=Stopped err=<nil>
	I0103 20:13:30.239006   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .DriverName
	W0103 20:13:30.239155   61400 fix.go:128] unexpected machine state, will restart: <nil>
	I0103 20:13:30.241210   61400 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-927922" ...
	I0103 20:13:29.067586   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.068030   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Found IP for machine: 192.168.39.139
	I0103 20:13:29.068048   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Reserving static IP address...
	I0103 20:13:29.068090   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has current primary IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.068505   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-018788", mac: "52:54:00:df:c8:9f", ip: "192.168.39.139"} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:13:29.068532   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | skip adding static IP to network mk-default-k8s-diff-port-018788 - found existing host DHCP lease matching {name: "default-k8s-diff-port-018788", mac: "52:54:00:df:c8:9f", ip: "192.168.39.139"}
	I0103 20:13:29.068549   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Reserved static IP address: 192.168.39.139
	I0103 20:13:29.068571   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for SSH to be available...
	I0103 20:13:29.068608   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | Getting to WaitForSSH function...
	I0103 20:13:29.071139   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.071587   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c8:9f", ip: ""} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:13:29.071620   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.071779   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | Using SSH client type: external
	I0103 20:13:29.071810   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | Using SSH private key: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/default-k8s-diff-port-018788/id_rsa (-rw-------)
	I0103 20:13:29.071858   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.139 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17885-9609/.minikube/machines/default-k8s-diff-port-018788/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0103 20:13:29.071879   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | About to run SSH command:
	I0103 20:13:29.071896   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | exit 0
	I0103 20:13:29.166962   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | SSH cmd err, output: <nil>: 
	I0103 20:13:29.167365   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetConfigRaw
	I0103 20:13:29.167989   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetIP
	I0103 20:13:29.170671   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.171052   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c8:9f", ip: ""} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:13:29.171092   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.171325   62050 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/default-k8s-diff-port-018788/config.json ...
	I0103 20:13:29.171564   62050 machine.go:88] provisioning docker machine ...
	I0103 20:13:29.171589   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .DriverName
	I0103 20:13:29.171866   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetMachineName
	I0103 20:13:29.172058   62050 buildroot.go:166] provisioning hostname "default-k8s-diff-port-018788"
	I0103 20:13:29.172084   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetMachineName
	I0103 20:13:29.172253   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHHostname
	I0103 20:13:29.175261   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.175626   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c8:9f", ip: ""} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:13:29.175660   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.175749   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHPort
	I0103 20:13:29.175943   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHKeyPath
	I0103 20:13:29.176219   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHKeyPath
	I0103 20:13:29.176392   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHUsername
	I0103 20:13:29.176611   62050 main.go:141] libmachine: Using SSH client type: native
	I0103 20:13:29.177083   62050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.39.139 22 <nil> <nil>}
	I0103 20:13:29.177105   62050 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-018788 && echo "default-k8s-diff-port-018788" | sudo tee /etc/hostname
	I0103 20:13:29.304876   62050 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-018788
	
	I0103 20:13:29.304915   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHHostname
	I0103 20:13:29.307645   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.308124   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c8:9f", ip: ""} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:13:29.308190   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.308389   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHPort
	I0103 20:13:29.308619   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHKeyPath
	I0103 20:13:29.308799   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHKeyPath
	I0103 20:13:29.308997   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHUsername
	I0103 20:13:29.309177   62050 main.go:141] libmachine: Using SSH client type: native
	I0103 20:13:29.309652   62050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.39.139 22 <nil> <nil>}
	I0103 20:13:29.309682   62050 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-018788' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-018788/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-018788' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0103 20:13:29.431479   62050 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0103 20:13:29.431517   62050 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17885-9609/.minikube CaCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17885-9609/.minikube}
	I0103 20:13:29.431555   62050 buildroot.go:174] setting up certificates
	I0103 20:13:29.431569   62050 provision.go:83] configureAuth start
	I0103 20:13:29.431582   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetMachineName
	I0103 20:13:29.431900   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetIP
	I0103 20:13:29.435012   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.435482   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c8:9f", ip: ""} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:13:29.435517   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.435638   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHHostname
	I0103 20:13:29.437865   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.438267   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c8:9f", ip: ""} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:13:29.438303   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.438388   62050 provision.go:138] copyHostCerts
	I0103 20:13:29.438448   62050 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem, removing ...
	I0103 20:13:29.438461   62050 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem
	I0103 20:13:29.438527   62050 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem (1078 bytes)
	I0103 20:13:29.438625   62050 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem, removing ...
	I0103 20:13:29.438633   62050 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem
	I0103 20:13:29.438653   62050 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem (1123 bytes)
	I0103 20:13:29.438713   62050 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem, removing ...
	I0103 20:13:29.438720   62050 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem
	I0103 20:13:29.438738   62050 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem (1679 bytes)
	I0103 20:13:29.438787   62050 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-018788 san=[192.168.39.139 192.168.39.139 localhost 127.0.0.1 minikube default-k8s-diff-port-018788]
	I0103 20:13:29.494476   62050 provision.go:172] copyRemoteCerts
	I0103 20:13:29.494562   62050 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0103 20:13:29.494590   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHHostname
	I0103 20:13:29.497330   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.497597   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c8:9f", ip: ""} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:13:29.497623   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.497786   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHPort
	I0103 20:13:29.497956   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHKeyPath
	I0103 20:13:29.498139   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHUsername
	I0103 20:13:29.498268   62050 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/default-k8s-diff-port-018788/id_rsa Username:docker}
	I0103 20:13:29.583531   62050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0103 20:13:29.605944   62050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0103 20:13:29.630747   62050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0103 20:13:29.656325   62050 provision.go:86] duration metric: configureAuth took 224.741883ms
	I0103 20:13:29.656355   62050 buildroot.go:189] setting minikube options for container-runtime
	I0103 20:13:29.656525   62050 config.go:182] Loaded profile config "default-k8s-diff-port-018788": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 20:13:29.656619   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHHostname
	I0103 20:13:29.659656   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.660182   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c8:9f", ip: ""} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:13:29.660213   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.660434   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHPort
	I0103 20:13:29.660643   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHKeyPath
	I0103 20:13:29.660864   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHKeyPath
	I0103 20:13:29.661019   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHUsername
	I0103 20:13:29.661217   62050 main.go:141] libmachine: Using SSH client type: native
	I0103 20:13:29.661571   62050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.39.139 22 <nil> <nil>}
	I0103 20:13:29.661588   62050 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0103 20:13:29.970938   62050 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0103 20:13:29.970966   62050 machine.go:91] provisioned docker machine in 799.385733ms
	I0103 20:13:29.970975   62050 start.go:300] post-start starting for "default-k8s-diff-port-018788" (driver="kvm2")
	I0103 20:13:29.970985   62050 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0103 20:13:29.971007   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .DriverName
	I0103 20:13:29.971387   62050 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0103 20:13:29.971418   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHHostname
	I0103 20:13:29.974114   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.974487   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c8:9f", ip: ""} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:13:29.974562   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.974706   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHPort
	I0103 20:13:29.974894   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHKeyPath
	I0103 20:13:29.975075   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHUsername
	I0103 20:13:29.975227   62050 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/default-k8s-diff-port-018788/id_rsa Username:docker}
	I0103 20:13:30.061987   62050 ssh_runner.go:195] Run: cat /etc/os-release
	I0103 20:13:30.066591   62050 info.go:137] Remote host: Buildroot 2021.02.12
	I0103 20:13:30.066620   62050 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-9609/.minikube/addons for local assets ...
	I0103 20:13:30.066704   62050 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-9609/.minikube/files for local assets ...
	I0103 20:13:30.066795   62050 filesync.go:149] local asset: /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem -> 167952.pem in /etc/ssl/certs
	I0103 20:13:30.066899   62050 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0103 20:13:30.076755   62050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem --> /etc/ssl/certs/167952.pem (1708 bytes)
	I0103 20:13:30.099740   62050 start.go:303] post-start completed in 128.750887ms
	I0103 20:13:30.099763   62050 fix.go:56] fixHost completed within 20.287967183s
	I0103 20:13:30.099782   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHHostname
	I0103 20:13:30.102744   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:30.103145   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c8:9f", ip: ""} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:13:30.103177   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:30.103409   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHPort
	I0103 20:13:30.103633   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHKeyPath
	I0103 20:13:30.103846   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHKeyPath
	I0103 20:13:30.104080   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHUsername
	I0103 20:13:30.104308   62050 main.go:141] libmachine: Using SSH client type: native
	I0103 20:13:30.104680   62050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.39.139 22 <nil> <nil>}
	I0103 20:13:30.104696   62050 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0103 20:13:30.219120   62050 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704312810.161605674
	
	I0103 20:13:30.219145   62050 fix.go:206] guest clock: 1704312810.161605674
	I0103 20:13:30.219154   62050 fix.go:219] Guest: 2024-01-03 20:13:30.161605674 +0000 UTC Remote: 2024-01-03 20:13:30.099767061 +0000 UTC m=+264.645600185 (delta=61.838613ms)
	I0103 20:13:30.219191   62050 fix.go:190] guest clock delta is within tolerance: 61.838613ms
	I0103 20:13:30.219202   62050 start.go:83] releasing machines lock for "default-k8s-diff-port-018788", held for 20.407440359s
	I0103 20:13:30.219230   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .DriverName
	I0103 20:13:30.219551   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetIP
	I0103 20:13:30.222200   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:30.222616   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c8:9f", ip: ""} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:13:30.222650   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:30.222811   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .DriverName
	I0103 20:13:30.223411   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .DriverName
	I0103 20:13:30.223568   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .DriverName
	I0103 20:13:30.223643   62050 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0103 20:13:30.223686   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHHostname
	I0103 20:13:30.223940   62050 ssh_runner.go:195] Run: cat /version.json
	I0103 20:13:30.223970   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHHostname
	I0103 20:13:30.226394   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:30.226746   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c8:9f", ip: ""} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:13:30.226777   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:30.226809   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:30.227080   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHPort
	I0103 20:13:30.227274   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHKeyPath
	I0103 20:13:30.227389   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c8:9f", ip: ""} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:13:30.227443   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHUsername
	I0103 20:13:30.227446   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:30.227567   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHPort
	I0103 20:13:30.227595   62050 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/default-k8s-diff-port-018788/id_rsa Username:docker}
	I0103 20:13:30.227739   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHKeyPath
	I0103 20:13:30.227864   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHUsername
	I0103 20:13:30.227972   62050 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/default-k8s-diff-port-018788/id_rsa Username:docker}
	I0103 20:13:30.315855   62050 ssh_runner.go:195] Run: systemctl --version
	I0103 20:13:30.359117   62050 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0103 20:13:30.499200   62050 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0103 20:13:30.505296   62050 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0103 20:13:30.505768   62050 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0103 20:13:30.520032   62050 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0103 20:13:30.520059   62050 start.go:475] detecting cgroup driver to use...
	I0103 20:13:30.520146   62050 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0103 20:13:30.532684   62050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0103 20:13:30.545152   62050 docker.go:203] disabling cri-docker service (if available) ...
	I0103 20:13:30.545222   62050 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0103 20:13:30.558066   62050 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0103 20:13:30.570999   62050 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0103 20:13:30.682484   62050 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0103 20:13:30.802094   62050 docker.go:219] disabling docker service ...
	I0103 20:13:30.802171   62050 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0103 20:13:30.815796   62050 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0103 20:13:30.827982   62050 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0103 20:13:30.952442   62050 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0103 20:13:31.068759   62050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0103 20:13:31.083264   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0103 20:13:31.102893   62050 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0103 20:13:31.102979   62050 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:13:31.112366   62050 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0103 20:13:31.112433   62050 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:13:31.122940   62050 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:13:31.133385   62050 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:13:31.144251   62050 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0103 20:13:31.155210   62050 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0103 20:13:31.164488   62050 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0103 20:13:31.164552   62050 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0103 20:13:31.177632   62050 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0103 20:13:31.186983   62050 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0103 20:13:31.309264   62050 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0103 20:13:31.493626   62050 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0103 20:13:31.493706   62050 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0103 20:13:31.504103   62050 start.go:543] Will wait 60s for crictl version
	I0103 20:13:31.504187   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:13:31.507927   62050 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0103 20:13:31.543967   62050 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0103 20:13:31.544046   62050 ssh_runner.go:195] Run: crio --version
	I0103 20:13:31.590593   62050 ssh_runner.go:195] Run: crio --version
	I0103 20:13:31.639562   62050 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0103 20:13:30.242808   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .Start
	I0103 20:13:30.242991   61400 main.go:141] libmachine: (old-k8s-version-927922) Ensuring networks are active...
	I0103 20:13:30.243776   61400 main.go:141] libmachine: (old-k8s-version-927922) Ensuring network default is active
	I0103 20:13:30.244126   61400 main.go:141] libmachine: (old-k8s-version-927922) Ensuring network mk-old-k8s-version-927922 is active
	I0103 20:13:30.244504   61400 main.go:141] libmachine: (old-k8s-version-927922) Getting domain xml...
	I0103 20:13:30.245244   61400 main.go:141] libmachine: (old-k8s-version-927922) Creating domain...
	I0103 20:13:31.553239   61400 main.go:141] libmachine: (old-k8s-version-927922) Waiting to get IP...
	I0103 20:13:31.554409   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:31.554942   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | unable to find current IP address of domain old-k8s-version-927922 in network mk-old-k8s-version-927922
	I0103 20:13:31.555022   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | I0103 20:13:31.554922   63030 retry.go:31] will retry after 192.654673ms: waiting for machine to come up
	I0103 20:13:31.749588   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:31.750035   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | unable to find current IP address of domain old-k8s-version-927922 in network mk-old-k8s-version-927922
	I0103 20:13:31.750058   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | I0103 20:13:31.750000   63030 retry.go:31] will retry after 270.810728ms: waiting for machine to come up
	I0103 20:13:32.022736   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:32.023310   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | unable to find current IP address of domain old-k8s-version-927922 in network mk-old-k8s-version-927922
	I0103 20:13:32.023337   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | I0103 20:13:32.023280   63030 retry.go:31] will retry after 327.320898ms: waiting for machine to come up
	I0103 20:13:32.352845   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:32.353453   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | unable to find current IP address of domain old-k8s-version-927922 in network mk-old-k8s-version-927922
	I0103 20:13:32.353501   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | I0103 20:13:32.353395   63030 retry.go:31] will retry after 575.525231ms: waiting for machine to come up
	I0103 20:13:32.930217   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:32.930833   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | unable to find current IP address of domain old-k8s-version-927922 in network mk-old-k8s-version-927922
	I0103 20:13:32.930859   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | I0103 20:13:32.930741   63030 retry.go:31] will retry after 571.986596ms: waiting for machine to come up
	I0103 20:13:30.936363   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:13:32.939164   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:13:29.833307   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:29.833374   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:29.844819   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:30.333870   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:30.333936   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:30.345802   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:30.833281   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:30.833400   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:30.848469   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:31.334071   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:31.334151   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:31.346445   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:31.833944   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:31.834034   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:31.848925   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:32.333349   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:32.333432   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:32.349173   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:32.833632   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:32.833696   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:32.848186   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:33.333659   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:33.333757   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:33.349560   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:33.834221   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:33.834309   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:33.846637   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:34.334219   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:34.334299   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:34.350703   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:31.641182   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetIP
	I0103 20:13:31.644371   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:31.644677   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c8:9f", ip: ""} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:13:31.644712   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:31.644971   62050 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0103 20:13:31.649106   62050 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0103 20:13:31.662256   62050 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0103 20:13:31.662380   62050 ssh_runner.go:195] Run: sudo crictl images --output json
	I0103 20:13:31.701210   62050 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0103 20:13:31.701275   62050 ssh_runner.go:195] Run: which lz4
	I0103 20:13:31.704890   62050 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0103 20:13:31.708756   62050 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0103 20:13:31.708783   62050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0103 20:13:33.543202   62050 crio.go:444] Took 1.838336 seconds to copy over tarball
	I0103 20:13:33.543282   62050 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0103 20:13:33.504797   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:33.505336   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | unable to find current IP address of domain old-k8s-version-927922 in network mk-old-k8s-version-927922
	I0103 20:13:33.505363   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | I0103 20:13:33.505286   63030 retry.go:31] will retry after 593.865088ms: waiting for machine to come up
	I0103 20:13:34.101055   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:34.101559   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | unable to find current IP address of domain old-k8s-version-927922 in network mk-old-k8s-version-927922
	I0103 20:13:34.101593   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | I0103 20:13:34.101507   63030 retry.go:31] will retry after 1.016460442s: waiting for machine to come up
	I0103 20:13:35.119877   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:35.120383   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | unable to find current IP address of domain old-k8s-version-927922 in network mk-old-k8s-version-927922
	I0103 20:13:35.120415   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | I0103 20:13:35.120352   63030 retry.go:31] will retry after 1.462823241s: waiting for machine to come up
	I0103 20:13:36.585467   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:36.585968   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | unable to find current IP address of domain old-k8s-version-927922 in network mk-old-k8s-version-927922
	I0103 20:13:36.585993   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | I0103 20:13:36.585932   63030 retry.go:31] will retry after 1.213807131s: waiting for machine to come up
	I0103 20:13:37.801504   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:37.801970   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | unable to find current IP address of domain old-k8s-version-927922 in network mk-old-k8s-version-927922
	I0103 20:13:37.801999   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | I0103 20:13:37.801896   63030 retry.go:31] will retry after 1.961227471s: waiting for machine to come up
	I0103 20:13:35.435661   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:13:37.435870   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:13:34.834090   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:34.834160   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:34.848657   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:35.333723   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:35.333809   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:35.348582   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:35.834128   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:35.834208   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:35.845911   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:36.333385   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:36.333512   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:36.346391   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:36.833978   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:36.834054   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:36.847134   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:37.333698   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:37.333785   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:37.346411   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:37.834024   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:37.834141   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:37.846961   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:38.333461   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:38.333665   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:38.346713   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:38.834378   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:38.834470   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:38.848473   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:39.333266   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:39.333347   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:39.345638   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:39.345664   62015 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0103 20:13:39.345692   62015 kubeadm.go:1135] stopping kube-system containers ...
	I0103 20:13:39.345721   62015 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0103 20:13:39.345792   62015 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0103 20:13:39.387671   62015 cri.go:89] found id: ""
	I0103 20:13:39.387778   62015 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0103 20:13:39.403523   62015 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0103 20:13:39.413114   62015 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0103 20:13:39.413188   62015 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0103 20:13:39.421503   62015 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0103 20:13:39.421527   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:39.561406   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:36.473303   62050 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.929985215s)
	I0103 20:13:36.473337   62050 crio.go:451] Took 2.930104 seconds to extract the tarball
	I0103 20:13:36.473350   62050 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0103 20:13:36.513202   62050 ssh_runner.go:195] Run: sudo crictl images --output json
	I0103 20:13:36.557201   62050 crio.go:496] all images are preloaded for cri-o runtime.
	I0103 20:13:36.557231   62050 cache_images.go:84] Images are preloaded, skipping loading
	I0103 20:13:36.557314   62050 ssh_runner.go:195] Run: crio config
	I0103 20:13:36.618916   62050 cni.go:84] Creating CNI manager for ""
	I0103 20:13:36.618948   62050 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0103 20:13:36.618982   62050 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0103 20:13:36.619007   62050 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.139 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-018788 NodeName:default-k8s-diff-port-018788 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.139"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.139 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0103 20:13:36.619167   62050 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.139
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-018788"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.139
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.139"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0103 20:13:36.619242   62050 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-018788 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.139
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-018788 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0103 20:13:36.619294   62050 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0103 20:13:36.628488   62050 binaries.go:44] Found k8s binaries, skipping transfer
	I0103 20:13:36.628571   62050 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0103 20:13:36.637479   62050 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I0103 20:13:36.652608   62050 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0103 20:13:36.667432   62050 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I0103 20:13:36.683138   62050 ssh_runner.go:195] Run: grep 192.168.39.139	control-plane.minikube.internal$ /etc/hosts
	I0103 20:13:36.687022   62050 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.139	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0103 20:13:36.698713   62050 certs.go:56] Setting up /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/default-k8s-diff-port-018788 for IP: 192.168.39.139
	I0103 20:13:36.698755   62050 certs.go:190] acquiring lock for shared ca certs: {Name:mkcbd6a6a2f3ee7625ecf4a1f72bb7f9689bd33d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:13:36.698948   62050 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17885-9609/.minikube/ca.key
	I0103 20:13:36.699009   62050 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.key
	I0103 20:13:36.699098   62050 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/default-k8s-diff-port-018788/client.key
	I0103 20:13:36.699157   62050 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/default-k8s-diff-port-018788/apiserver.key.7716debd
	I0103 20:13:36.699196   62050 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/default-k8s-diff-port-018788/proxy-client.key
	I0103 20:13:36.699287   62050 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795.pem (1338 bytes)
	W0103 20:13:36.699314   62050 certs.go:433] ignoring /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795_empty.pem, impossibly tiny 0 bytes
	I0103 20:13:36.699324   62050 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem (1675 bytes)
	I0103 20:13:36.699349   62050 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem (1078 bytes)
	I0103 20:13:36.699370   62050 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem (1123 bytes)
	I0103 20:13:36.699395   62050 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem (1679 bytes)
	I0103 20:13:36.699434   62050 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem (1708 bytes)
	I0103 20:13:36.700045   62050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/default-k8s-diff-port-018788/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0103 20:13:36.721872   62050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/default-k8s-diff-port-018788/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0103 20:13:36.744733   62050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/default-k8s-diff-port-018788/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0103 20:13:36.772245   62050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/default-k8s-diff-port-018788/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0103 20:13:36.796690   62050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0103 20:13:36.819792   62050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0103 20:13:36.843109   62050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0103 20:13:36.866679   62050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0103 20:13:36.889181   62050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0103 20:13:36.912082   62050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795.pem --> /usr/share/ca-certificates/16795.pem (1338 bytes)
	I0103 20:13:36.935621   62050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem --> /usr/share/ca-certificates/167952.pem (1708 bytes)
	I0103 20:13:36.959090   62050 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0103 20:13:36.974873   62050 ssh_runner.go:195] Run: openssl version
	I0103 20:13:36.980449   62050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167952.pem && ln -fs /usr/share/ca-certificates/167952.pem /etc/ssl/certs/167952.pem"
	I0103 20:13:36.990278   62050 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167952.pem
	I0103 20:13:36.995822   62050 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  3 19:07 /usr/share/ca-certificates/167952.pem
	I0103 20:13:36.995903   62050 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167952.pem
	I0103 20:13:37.001504   62050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167952.pem /etc/ssl/certs/3ec20f2e.0"
	I0103 20:13:37.011628   62050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0103 20:13:37.021373   62050 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:13:37.025697   62050 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  3 18:58 /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:13:37.025752   62050 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:13:37.031286   62050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0103 20:13:37.041075   62050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16795.pem && ln -fs /usr/share/ca-certificates/16795.pem /etc/ssl/certs/16795.pem"
	I0103 20:13:37.050789   62050 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16795.pem
	I0103 20:13:37.055584   62050 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  3 19:07 /usr/share/ca-certificates/16795.pem
	I0103 20:13:37.055647   62050 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16795.pem
	I0103 20:13:37.061079   62050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16795.pem /etc/ssl/certs/51391683.0"
	I0103 20:13:37.070792   62050 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0103 20:13:37.075050   62050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0103 20:13:37.081170   62050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0103 20:13:37.087372   62050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0103 20:13:37.093361   62050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0103 20:13:37.099203   62050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0103 20:13:37.104932   62050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0103 20:13:37.110783   62050 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-018788 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-018788 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.139 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 20:13:37.110955   62050 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0103 20:13:37.111003   62050 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0103 20:13:37.146687   62050 cri.go:89] found id: ""
	I0103 20:13:37.146766   62050 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0103 20:13:37.156789   62050 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0103 20:13:37.156808   62050 kubeadm.go:636] restartCluster start
	I0103 20:13:37.156882   62050 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0103 20:13:37.166168   62050 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:37.167346   62050 kubeconfig.go:92] found "default-k8s-diff-port-018788" server: "https://192.168.39.139:8444"
	I0103 20:13:37.169750   62050 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0103 20:13:37.178965   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:37.179035   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:37.190638   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:37.679072   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:37.679142   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:37.691149   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:38.179709   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:38.179804   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:38.191656   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:38.679825   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:38.679912   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:38.693380   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:39.179927   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:39.180042   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:39.193368   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:39.679947   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:39.680049   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:39.692444   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:40.179510   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:40.179600   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:40.192218   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:39.764226   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:39.764651   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | unable to find current IP address of domain old-k8s-version-927922 in network mk-old-k8s-version-927922
	I0103 20:13:39.764681   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | I0103 20:13:39.764592   63030 retry.go:31] will retry after 2.38598238s: waiting for machine to come up
	I0103 20:13:42.151992   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:42.152486   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | unable to find current IP address of domain old-k8s-version-927922 in network mk-old-k8s-version-927922
	I0103 20:13:42.152517   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | I0103 20:13:42.152435   63030 retry.go:31] will retry after 3.320569317s: waiting for machine to come up
	I0103 20:13:39.438887   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:13:41.441552   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:13:40.707462   62015 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.146014282s)
	I0103 20:13:40.707501   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:40.913812   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:41.008294   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:41.093842   62015 api_server.go:52] waiting for apiserver process to appear ...
	I0103 20:13:41.093931   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:13:41.594484   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:13:42.094333   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:13:42.594647   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:13:43.094744   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:13:43.594323   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:13:43.628624   62015 api_server.go:72] duration metric: took 2.534781213s to wait for apiserver process to appear ...
	I0103 20:13:43.628653   62015 api_server.go:88] waiting for apiserver healthz status ...
	I0103 20:13:43.628674   62015 api_server.go:253] Checking apiserver healthz at https://192.168.61.245:8443/healthz ...
	I0103 20:13:40.679867   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:40.679959   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:40.692707   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:41.179865   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:41.179962   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:41.192901   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:41.679604   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:41.679668   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:41.691755   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:42.179959   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:42.180082   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:42.193149   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:42.679682   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:42.679808   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:42.696777   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:43.179236   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:43.179343   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:43.195021   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:43.679230   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:43.679339   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:43.696886   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:44.179488   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:44.179558   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:44.194865   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:44.679087   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:44.679216   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:44.693383   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:45.179505   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:45.179607   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:45.190496   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:45.474145   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:45.474596   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | unable to find current IP address of domain old-k8s-version-927922 in network mk-old-k8s-version-927922
	I0103 20:13:45.474623   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | I0103 20:13:45.474542   63030 retry.go:31] will retry after 3.652901762s: waiting for machine to come up
	I0103 20:13:43.937146   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:13:45.938328   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:13:47.941499   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:13:47.277935   62015 api_server.go:279] https://192.168.61.245:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0103 20:13:47.277971   62015 api_server.go:103] status: https://192.168.61.245:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0103 20:13:47.277988   62015 api_server.go:253] Checking apiserver healthz at https://192.168.61.245:8443/healthz ...
	I0103 20:13:47.543418   62015 api_server.go:279] https://192.168.61.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0103 20:13:47.543449   62015 api_server.go:103] status: https://192.168.61.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0103 20:13:47.629720   62015 api_server.go:253] Checking apiserver healthz at https://192.168.61.245:8443/healthz ...
	I0103 20:13:47.635340   62015 api_server.go:279] https://192.168.61.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0103 20:13:47.635373   62015 api_server.go:103] status: https://192.168.61.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0103 20:13:48.128849   62015 api_server.go:253] Checking apiserver healthz at https://192.168.61.245:8443/healthz ...
	I0103 20:13:48.135534   62015 api_server.go:279] https://192.168.61.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0103 20:13:48.135576   62015 api_server.go:103] status: https://192.168.61.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0103 20:13:48.628977   62015 api_server.go:253] Checking apiserver healthz at https://192.168.61.245:8443/healthz ...
	I0103 20:13:48.634609   62015 api_server.go:279] https://192.168.61.245:8443/healthz returned 200:
	ok
	I0103 20:13:48.643475   62015 api_server.go:141] control plane version: v1.29.0-rc.2
	I0103 20:13:48.643505   62015 api_server.go:131] duration metric: took 5.01484434s to wait for apiserver health ...
	I0103 20:13:48.643517   62015 cni.go:84] Creating CNI manager for ""
	I0103 20:13:48.643526   62015 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0103 20:13:48.645945   62015 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0103 20:13:48.647556   62015 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0103 20:13:48.671093   62015 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0103 20:13:48.698710   62015 system_pods.go:43] waiting for kube-system pods to appear ...
	I0103 20:13:48.712654   62015 system_pods.go:59] 8 kube-system pods found
	I0103 20:13:48.712704   62015 system_pods.go:61] "coredns-76f75df574-rbx58" [d5e91e6a-e3f9-4dbc-83ff-3069cb67847c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0103 20:13:48.712717   62015 system_pods.go:61] "etcd-no-preload-749210" [3cfe84f3-28bd-490f-a7fc-152c1b9784ce] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0103 20:13:48.712729   62015 system_pods.go:61] "kube-apiserver-no-preload-749210" [1d9d03fa-23c6-4432-b7ec-905fcab8a628] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0103 20:13:48.712739   62015 system_pods.go:61] "kube-controller-manager-no-preload-749210" [4e4207ef-8844-4547-88a4-b12026250554] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0103 20:13:48.712761   62015 system_pods.go:61] "kube-proxy-5hwf4" [98fafdf5-9a74-4c9f-96eb-20064c72c4e1] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0103 20:13:48.712771   62015 system_pods.go:61] "kube-scheduler-no-preload-749210" [21e70024-26b0-4740-ba52-99893ca20809] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0103 20:13:48.712780   62015 system_pods.go:61] "metrics-server-57f55c9bc5-tqn5m" [8cc1dc91-fafb-4405-8820-a7f99ccbbb0c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0103 20:13:48.712793   62015 system_pods.go:61] "storage-provisioner" [1bf4f1d7-c083-47e7-9976-76bbc72e7bff] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0103 20:13:48.712806   62015 system_pods.go:74] duration metric: took 14.071881ms to wait for pod list to return data ...
	I0103 20:13:48.712818   62015 node_conditions.go:102] verifying NodePressure condition ...
	I0103 20:13:48.716271   62015 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0103 20:13:48.716301   62015 node_conditions.go:123] node cpu capacity is 2
	I0103 20:13:48.716326   62015 node_conditions.go:105] duration metric: took 3.496257ms to run NodePressure ...
	I0103 20:13:48.716348   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:49.020956   62015 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0103 20:13:49.025982   62015 kubeadm.go:787] kubelet initialised
	I0103 20:13:49.026003   62015 kubeadm.go:788] duration metric: took 5.022549ms waiting for restarted kubelet to initialise ...
	I0103 20:13:49.026010   62015 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:13:49.033471   62015 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-rbx58" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:49.038777   62015 pod_ready.go:97] node "no-preload-749210" hosting pod "coredns-76f75df574-rbx58" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-749210" has status "Ready":"False"
	I0103 20:13:49.038806   62015 pod_ready.go:81] duration metric: took 5.286579ms waiting for pod "coredns-76f75df574-rbx58" in "kube-system" namespace to be "Ready" ...
	E0103 20:13:49.038823   62015 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-749210" hosting pod "coredns-76f75df574-rbx58" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-749210" has status "Ready":"False"
	I0103 20:13:49.038834   62015 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-749210" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:49.044324   62015 pod_ready.go:97] node "no-preload-749210" hosting pod "etcd-no-preload-749210" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-749210" has status "Ready":"False"
	I0103 20:13:49.044349   62015 pod_ready.go:81] duration metric: took 5.506628ms waiting for pod "etcd-no-preload-749210" in "kube-system" namespace to be "Ready" ...
	E0103 20:13:49.044357   62015 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-749210" hosting pod "etcd-no-preload-749210" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-749210" has status "Ready":"False"
	I0103 20:13:49.044363   62015 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-749210" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:49.049022   62015 pod_ready.go:97] node "no-preload-749210" hosting pod "kube-apiserver-no-preload-749210" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-749210" has status "Ready":"False"
	I0103 20:13:49.049058   62015 pod_ready.go:81] duration metric: took 4.681942ms waiting for pod "kube-apiserver-no-preload-749210" in "kube-system" namespace to be "Ready" ...
	E0103 20:13:49.049068   62015 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-749210" hosting pod "kube-apiserver-no-preload-749210" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-749210" has status "Ready":"False"
	I0103 20:13:49.049073   62015 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-749210" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:49.102378   62015 pod_ready.go:97] node "no-preload-749210" hosting pod "kube-controller-manager-no-preload-749210" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-749210" has status "Ready":"False"
	I0103 20:13:49.102407   62015 pod_ready.go:81] duration metric: took 53.323019ms waiting for pod "kube-controller-manager-no-preload-749210" in "kube-system" namespace to be "Ready" ...
	E0103 20:13:49.102415   62015 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-749210" hosting pod "kube-controller-manager-no-preload-749210" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-749210" has status "Ready":"False"
	I0103 20:13:49.102424   62015 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-5hwf4" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:49.504820   62015 pod_ready.go:97] node "no-preload-749210" hosting pod "kube-proxy-5hwf4" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-749210" has status "Ready":"False"
	I0103 20:13:49.504852   62015 pod_ready.go:81] duration metric: took 402.417876ms waiting for pod "kube-proxy-5hwf4" in "kube-system" namespace to be "Ready" ...
	E0103 20:13:49.504865   62015 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-749210" hosting pod "kube-proxy-5hwf4" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-749210" has status "Ready":"False"
	I0103 20:13:49.504875   62015 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-749210" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:49.905230   62015 pod_ready.go:97] node "no-preload-749210" hosting pod "kube-scheduler-no-preload-749210" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-749210" has status "Ready":"False"
	I0103 20:13:49.905265   62015 pod_ready.go:81] duration metric: took 400.380902ms waiting for pod "kube-scheduler-no-preload-749210" in "kube-system" namespace to be "Ready" ...
	E0103 20:13:49.905278   62015 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-749210" hosting pod "kube-scheduler-no-preload-749210" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-749210" has status "Ready":"False"
	I0103 20:13:49.905287   62015 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:50.304848   62015 pod_ready.go:97] node "no-preload-749210" hosting pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-749210" has status "Ready":"False"
	I0103 20:13:50.304883   62015 pod_ready.go:81] duration metric: took 399.567527ms waiting for pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace to be "Ready" ...
	E0103 20:13:50.304896   62015 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-749210" hosting pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-749210" has status "Ready":"False"
	I0103 20:13:50.304905   62015 pod_ready.go:38] duration metric: took 1.278887327s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
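Each of the per-pod waits above is skipped immediately because the node itself still reports Ready=False after the restart, so the extra wait finishes in ~1.3s without any pod actually becoming Ready. An equivalent manual check (illustrative, assuming the kubectl context name matches the profile) would be:

    # wait for the node to become Ready, then for CoreDNS
    kubectl --context no-preload-749210 wait node/no-preload-749210 \
      --for=condition=Ready --timeout=4m
    kubectl --context no-preload-749210 -n kube-system wait pod \
      -l k8s-app=kube-dns --for=condition=Ready --timeout=4m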
	I0103 20:13:50.304926   62015 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0103 20:13:50.331405   62015 ops.go:34] apiserver oom_adj: -16
	I0103 20:13:50.331428   62015 kubeadm.go:640] restartCluster took 21.020194358s
	I0103 20:13:50.331439   62015 kubeadm.go:406] StartCluster complete in 21.075864121s
	I0103 20:13:50.331459   62015 settings.go:142] acquiring lock: {Name:mkd213c48538fa01cb82b417485055a8adbf5e4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:13:50.331541   62015 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17885-9609/kubeconfig
	I0103 20:13:50.333553   62015 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-9609/kubeconfig: {Name:mkbd4e6a8b39f5a4a43fb71671a7bbd8b1617cf1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:13:50.333969   62015 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0103 20:13:50.334045   62015 addons.go:69] Setting storage-provisioner=true in profile "no-preload-749210"
	I0103 20:13:50.334064   62015 addons.go:237] Setting addon storage-provisioner=true in "no-preload-749210"
	W0103 20:13:50.334072   62015 addons.go:246] addon storage-provisioner should already be in state true
	I0103 20:13:50.334082   62015 config.go:182] Loaded profile config "no-preload-749210": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0103 20:13:50.334121   62015 host.go:66] Checking if "no-preload-749210" exists ...
	I0103 20:13:50.334129   62015 addons.go:69] Setting default-storageclass=true in profile "no-preload-749210"
	I0103 20:13:50.334143   62015 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-749210"
	I0103 20:13:50.334556   62015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:13:50.334588   62015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:13:50.334602   62015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:13:50.334620   62015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:13:50.334681   62015 addons.go:69] Setting metrics-server=true in profile "no-preload-749210"
	I0103 20:13:50.334708   62015 addons.go:237] Setting addon metrics-server=true in "no-preload-749210"
	I0103 20:13:50.334712   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	W0103 20:13:50.334717   62015 addons.go:246] addon metrics-server should already be in state true
	I0103 20:13:50.334756   62015 host.go:66] Checking if "no-preload-749210" exists ...
	I0103 20:13:50.335152   62015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:13:50.335190   62015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:13:50.343173   62015 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-749210" context rescaled to 1 replicas
	I0103 20:13:50.343213   62015 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.245 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0103 20:13:50.345396   62015 out.go:177] * Verifying Kubernetes components...
	I0103 20:13:50.347721   62015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 20:13:50.353122   62015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34207
	I0103 20:13:50.353250   62015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35835
	I0103 20:13:50.353274   62015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44003
	I0103 20:13:50.353737   62015 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:13:50.353896   62015 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:13:50.354283   62015 main.go:141] libmachine: Using API Version  1
	I0103 20:13:50.354299   62015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:13:50.354488   62015 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:13:50.354491   62015 main.go:141] libmachine: Using API Version  1
	I0103 20:13:50.354588   62015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:13:50.354889   62015 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:13:50.355115   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetState
	I0103 20:13:50.355165   62015 main.go:141] libmachine: Using API Version  1
	I0103 20:13:50.355181   62015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:13:50.355244   62015 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:13:50.355746   62015 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:13:50.356199   62015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:13:50.356239   62015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:13:50.356792   62015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:13:50.356830   62015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:13:50.359095   62015 addons.go:237] Setting addon default-storageclass=true in "no-preload-749210"
	W0103 20:13:50.359114   62015 addons.go:246] addon default-storageclass should already be in state true
	I0103 20:13:50.359139   62015 host.go:66] Checking if "no-preload-749210" exists ...
	I0103 20:13:50.359554   62015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:13:50.359595   62015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:13:50.377094   62015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34801
	I0103 20:13:50.377218   62015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33435
	I0103 20:13:50.377679   62015 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:13:50.377779   62015 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:13:50.378353   62015 main.go:141] libmachine: Using API Version  1
	I0103 20:13:50.378376   62015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:13:50.378472   62015 main.go:141] libmachine: Using API Version  1
	I0103 20:13:50.378488   62015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:13:50.378816   62015 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:13:50.378874   62015 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:13:50.379033   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetState
	I0103 20:13:50.379033   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetState
	I0103 20:13:50.381013   62015 main.go:141] libmachine: (no-preload-749210) Calling .DriverName
	I0103 20:13:50.381240   62015 main.go:141] libmachine: (no-preload-749210) Calling .DriverName
	I0103 20:13:50.389265   62015 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 20:13:50.383848   62015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38103
	I0103 20:13:50.391000   62015 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0103 20:13:50.391023   62015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0103 20:13:50.391049   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHHostname
	I0103 20:13:50.391062   62015 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0103 20:13:45.679265   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:45.679374   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:45.690232   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:46.179862   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:46.179963   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:46.190942   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:46.679624   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:46.679738   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:46.691578   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:47.179185   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:47.179280   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:47.193995   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:47.194029   62050 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0103 20:13:47.194050   62050 kubeadm.go:1135] stopping kube-system containers ...
	I0103 20:13:47.194061   62050 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0103 20:13:47.194114   62050 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0103 20:13:47.235512   62050 cri.go:89] found id: ""
	I0103 20:13:47.235625   62050 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0103 20:13:47.251115   62050 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0103 20:13:47.261566   62050 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0103 20:13:47.261631   62050 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0103 20:13:47.271217   62050 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0103 20:13:47.271244   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:47.408550   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:48.262356   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:48.492357   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:48.597607   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:48.699097   62050 api_server.go:52] waiting for apiserver process to appear ...
	I0103 20:13:48.699194   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:13:49.199349   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:13:49.699758   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:13:50.199818   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:13:50.392557   62015 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0103 20:13:50.392577   62015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0103 20:13:50.392597   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHHostname
	I0103 20:13:50.391469   62015 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:13:50.393835   62015 main.go:141] libmachine: Using API Version  1
	I0103 20:13:50.393854   62015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:13:50.394340   62015 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:13:50.394967   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:50.395384   62015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:13:50.395419   62015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:13:50.395602   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHPort
	I0103 20:13:50.395663   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:87:c7", ip: ""} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:50.395683   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:50.395811   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHKeyPath
	I0103 20:13:50.395981   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHUsername
	I0103 20:13:50.396173   62015 sshutil.go:53] new ssh client: &{IP:192.168.61.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/no-preload-749210/id_rsa Username:docker}
	I0103 20:13:50.398544   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:50.399117   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:87:c7", ip: ""} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:50.399142   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:50.399363   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHPort
	I0103 20:13:50.399582   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHKeyPath
	I0103 20:13:50.399692   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHUsername
	I0103 20:13:50.399761   62015 sshutil.go:53] new ssh client: &{IP:192.168.61.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/no-preload-749210/id_rsa Username:docker}
	I0103 20:13:50.434719   62015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44691
	I0103 20:13:50.435279   62015 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:13:50.435938   62015 main.go:141] libmachine: Using API Version  1
	I0103 20:13:50.435972   62015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:13:50.436407   62015 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:13:50.436630   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetState
	I0103 20:13:50.438992   62015 main.go:141] libmachine: (no-preload-749210) Calling .DriverName
	I0103 20:13:50.442816   62015 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0103 20:13:50.442835   62015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0103 20:13:50.442856   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHHostname
	I0103 20:13:50.450157   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:50.451549   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:87:c7", ip: ""} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:50.451575   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHPort
	I0103 20:13:50.451571   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:50.453023   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHKeyPath
	I0103 20:13:50.453577   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHUsername
	I0103 20:13:50.453753   62015 sshutil.go:53] new ssh client: &{IP:192.168.61.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/no-preload-749210/id_rsa Username:docker}
	I0103 20:13:50.556135   62015 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0103 20:13:50.556161   62015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0103 20:13:50.583620   62015 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0103 20:13:50.583643   62015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0103 20:13:50.589708   62015 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0103 20:13:50.614203   62015 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0103 20:13:50.631936   62015 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0103 20:13:50.631961   62015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0103 20:13:50.708658   62015 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0103 20:13:50.772364   62015 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0103 20:13:50.772434   62015 node_ready.go:35] waiting up to 6m0s for node "no-preload-749210" to be "Ready" ...
	I0103 20:13:51.785361   62015 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.195620446s)
	I0103 20:13:51.785407   62015 main.go:141] libmachine: Making call to close driver server
	I0103 20:13:51.785421   62015 main.go:141] libmachine: (no-preload-749210) Calling .Close
	I0103 20:13:51.785427   62015 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.171187695s)
	I0103 20:13:51.785463   62015 main.go:141] libmachine: Making call to close driver server
	I0103 20:13:51.785488   62015 main.go:141] libmachine: (no-preload-749210) Calling .Close
	I0103 20:13:51.785603   62015 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.076908391s)
	I0103 20:13:51.785687   62015 main.go:141] libmachine: (no-preload-749210) DBG | Closing plugin on server side
	I0103 20:13:51.785717   62015 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:13:51.785730   62015 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:13:51.785739   62015 main.go:141] libmachine: Making call to close driver server
	I0103 20:13:51.785741   62015 main.go:141] libmachine: (no-preload-749210) DBG | Closing plugin on server side
	I0103 20:13:51.785748   62015 main.go:141] libmachine: (no-preload-749210) Calling .Close
	I0103 20:13:51.785819   62015 main.go:141] libmachine: Making call to close driver server
	I0103 20:13:51.785837   62015 main.go:141] libmachine: (no-preload-749210) Calling .Close
	I0103 20:13:51.786108   62015 main.go:141] libmachine: (no-preload-749210) DBG | Closing plugin on server side
	I0103 20:13:51.786143   62015 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:13:51.786152   62015 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:13:51.786166   62015 main.go:141] libmachine: Making call to close driver server
	I0103 20:13:51.786178   62015 main.go:141] libmachine: (no-preload-749210) Calling .Close
	I0103 20:13:51.786444   62015 main.go:141] libmachine: (no-preload-749210) DBG | Closing plugin on server side
	I0103 20:13:51.786495   62015 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:13:51.786536   62015 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:13:51.786553   62015 addons.go:473] Verifying addon metrics-server=true in "no-preload-749210"
	I0103 20:13:51.787346   62015 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:13:51.787365   62015 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:13:51.787376   62015 main.go:141] libmachine: Making call to close driver server
	I0103 20:13:51.787386   62015 main.go:141] libmachine: (no-preload-749210) Calling .Close
	I0103 20:13:51.787596   62015 main.go:141] libmachine: (no-preload-749210) DBG | Closing plugin on server side
	I0103 20:13:51.787638   62015 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:13:51.787652   62015 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:13:51.787855   62015 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:13:51.787859   62015 main.go:141] libmachine: (no-preload-749210) DBG | Closing plugin on server side
	I0103 20:13:51.787871   62015 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:13:51.797560   62015 main.go:141] libmachine: Making call to close driver server
	I0103 20:13:51.797584   62015 main.go:141] libmachine: (no-preload-749210) Calling .Close
	I0103 20:13:51.797860   62015 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:13:51.797874   62015 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:13:51.800087   62015 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I0103 20:13:49.131462   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:49.132013   61400 main.go:141] libmachine: (old-k8s-version-927922) Found IP for machine: 192.168.72.12
	I0103 20:13:49.132041   61400 main.go:141] libmachine: (old-k8s-version-927922) Reserving static IP address...
	I0103 20:13:49.132059   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has current primary IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:49.132507   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "old-k8s-version-927922", mac: "52:54:00:61:79:06", ip: "192.168.72.12"} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:13:49.132543   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | skip adding static IP to network mk-old-k8s-version-927922 - found existing host DHCP lease matching {name: "old-k8s-version-927922", mac: "52:54:00:61:79:06", ip: "192.168.72.12"}
	I0103 20:13:49.132560   61400 main.go:141] libmachine: (old-k8s-version-927922) Reserved static IP address: 192.168.72.12
	I0103 20:13:49.132582   61400 main.go:141] libmachine: (old-k8s-version-927922) Waiting for SSH to be available...
	I0103 20:13:49.132597   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | Getting to WaitForSSH function...
	I0103 20:13:49.135129   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:49.135499   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:79:06", ip: ""} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:13:49.135536   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:49.135703   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | Using SSH client type: external
	I0103 20:13:49.135728   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | Using SSH private key: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/old-k8s-version-927922/id_rsa (-rw-------)
	I0103 20:13:49.135765   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.12 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17885-9609/.minikube/machines/old-k8s-version-927922/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0103 20:13:49.135780   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | About to run SSH command:
	I0103 20:13:49.135796   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | exit 0
	I0103 20:13:49.226568   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | SSH cmd err, output: <nil>: 
	I0103 20:13:49.226890   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetConfigRaw
	I0103 20:13:49.227536   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetIP
	I0103 20:13:49.230668   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:49.231038   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:79:06", ip: ""} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:13:49.231064   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:49.231277   61400 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/old-k8s-version-927922/config.json ...
	I0103 20:13:49.231456   61400 machine.go:88] provisioning docker machine ...
	I0103 20:13:49.231473   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .DriverName
	I0103 20:13:49.231708   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetMachineName
	I0103 20:13:49.231862   61400 buildroot.go:166] provisioning hostname "old-k8s-version-927922"
	I0103 20:13:49.231885   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetMachineName
	I0103 20:13:49.232002   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHHostname
	I0103 20:13:49.234637   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:49.235012   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:79:06", ip: ""} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:13:49.235048   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:49.235196   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHPort
	I0103 20:13:49.235338   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHKeyPath
	I0103 20:13:49.235445   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHKeyPath
	I0103 20:13:49.235543   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHUsername
	I0103 20:13:49.235748   61400 main.go:141] libmachine: Using SSH client type: native
	I0103 20:13:49.236196   61400 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.72.12 22 <nil> <nil>}
	I0103 20:13:49.236226   61400 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-927922 && echo "old-k8s-version-927922" | sudo tee /etc/hostname
	I0103 20:13:49.377588   61400 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-927922
	
	I0103 20:13:49.377625   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHHostname
	I0103 20:13:49.381244   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:49.381634   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:79:06", ip: ""} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:13:49.381680   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:49.381885   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHPort
	I0103 20:13:49.382115   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHKeyPath
	I0103 20:13:49.382311   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHKeyPath
	I0103 20:13:49.382538   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHUsername
	I0103 20:13:49.382721   61400 main.go:141] libmachine: Using SSH client type: native
	I0103 20:13:49.383096   61400 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.72.12 22 <nil> <nil>}
	I0103 20:13:49.383125   61400 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-927922' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-927922/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-927922' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0103 20:13:49.517214   61400 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0103 20:13:49.517246   61400 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17885-9609/.minikube CaCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17885-9609/.minikube}
	I0103 20:13:49.517268   61400 buildroot.go:174] setting up certificates
	I0103 20:13:49.517280   61400 provision.go:83] configureAuth start
	I0103 20:13:49.517299   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetMachineName
	I0103 20:13:49.517606   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetIP
	I0103 20:13:49.520819   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:49.521255   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:79:06", ip: ""} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:13:49.521284   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:49.521442   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHHostname
	I0103 20:13:49.523926   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:49.524310   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:79:06", ip: ""} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:13:49.524364   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:49.524495   61400 provision.go:138] copyHostCerts
	I0103 20:13:49.524604   61400 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem, removing ...
	I0103 20:13:49.524618   61400 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem
	I0103 20:13:49.524714   61400 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem (1078 bytes)
	I0103 20:13:49.524842   61400 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem, removing ...
	I0103 20:13:49.524855   61400 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem
	I0103 20:13:49.524885   61400 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem (1123 bytes)
	I0103 20:13:49.524982   61400 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem, removing ...
	I0103 20:13:49.525020   61400 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem
	I0103 20:13:49.525063   61400 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem (1679 bytes)
	I0103 20:13:49.525143   61400 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-927922 san=[192.168.72.12 192.168.72.12 localhost 127.0.0.1 minikube old-k8s-version-927922]
	I0103 20:13:49.896621   61400 provision.go:172] copyRemoteCerts
	I0103 20:13:49.896687   61400 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0103 20:13:49.896728   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHHostname
	I0103 20:13:49.899859   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:49.900239   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:79:06", ip: ""} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:13:49.900274   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:49.900456   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHPort
	I0103 20:13:49.900690   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHKeyPath
	I0103 20:13:49.900873   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHUsername
	I0103 20:13:49.901064   61400 sshutil.go:53] new ssh client: &{IP:192.168.72.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/old-k8s-version-927922/id_rsa Username:docker}
	I0103 20:13:49.993569   61400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0103 20:13:50.017597   61400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0103 20:13:50.041139   61400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0103 20:13:50.064499   61400 provision.go:86] duration metric: configureAuth took 547.178498ms
	I0103 20:13:50.064533   61400 buildroot.go:189] setting minikube options for container-runtime
	I0103 20:13:50.064770   61400 config.go:182] Loaded profile config "old-k8s-version-927922": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0103 20:13:50.064848   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHHostname
	I0103 20:13:50.068198   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:50.068637   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:79:06", ip: ""} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:13:50.068672   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:50.068873   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHPort
	I0103 20:13:50.069080   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHKeyPath
	I0103 20:13:50.069284   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHKeyPath
	I0103 20:13:50.069457   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHUsername
	I0103 20:13:50.069640   61400 main.go:141] libmachine: Using SSH client type: native
	I0103 20:13:50.070115   61400 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.72.12 22 <nil> <nil>}
	I0103 20:13:50.070146   61400 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0103 20:13:50.450845   61400 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0103 20:13:50.450873   61400 machine.go:91] provisioned docker machine in 1.219404511s
	I0103 20:13:50.450886   61400 start.go:300] post-start starting for "old-k8s-version-927922" (driver="kvm2")
	I0103 20:13:50.450899   61400 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0103 20:13:50.450924   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .DriverName
	I0103 20:13:50.451263   61400 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0103 20:13:50.451328   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHHostname
	I0103 20:13:50.455003   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:50.455413   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:79:06", ip: ""} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:13:50.455436   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:50.455644   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHPort
	I0103 20:13:50.455796   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHKeyPath
	I0103 20:13:50.455919   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHUsername
	I0103 20:13:50.456031   61400 sshutil.go:53] new ssh client: &{IP:192.168.72.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/old-k8s-version-927922/id_rsa Username:docker}
	I0103 20:13:50.563846   61400 ssh_runner.go:195] Run: cat /etc/os-release
	I0103 20:13:50.569506   61400 info.go:137] Remote host: Buildroot 2021.02.12
	I0103 20:13:50.569532   61400 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-9609/.minikube/addons for local assets ...
	I0103 20:13:50.569626   61400 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-9609/.minikube/files for local assets ...
	I0103 20:13:50.569726   61400 filesync.go:149] local asset: /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem -> 167952.pem in /etc/ssl/certs
	I0103 20:13:50.569857   61400 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0103 20:13:50.581218   61400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem --> /etc/ssl/certs/167952.pem (1708 bytes)
	I0103 20:13:50.612328   61400 start.go:303] post-start completed in 161.425373ms
	I0103 20:13:50.612359   61400 fix.go:56] fixHost completed within 20.392994827s
	I0103 20:13:50.612383   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHHostname
	I0103 20:13:50.615776   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:50.616241   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:79:06", ip: ""} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:13:50.616268   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:50.616368   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHPort
	I0103 20:13:50.616655   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHKeyPath
	I0103 20:13:50.616849   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHKeyPath
	I0103 20:13:50.617088   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHUsername
	I0103 20:13:50.617286   61400 main.go:141] libmachine: Using SSH client type: native
	I0103 20:13:50.617764   61400 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.72.12 22 <nil> <nil>}
	I0103 20:13:50.617791   61400 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0103 20:13:50.740437   61400 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704312830.691065491
	
	I0103 20:13:50.740506   61400 fix.go:206] guest clock: 1704312830.691065491
	I0103 20:13:50.740528   61400 fix.go:219] Guest: 2024-01-03 20:13:50.691065491 +0000 UTC Remote: 2024-01-03 20:13:50.612363446 +0000 UTC m=+357.606588552 (delta=78.702045ms)
	I0103 20:13:50.740563   61400 fix.go:190] guest clock delta is within tolerance: 78.702045ms
	I0103 20:13:50.740574   61400 start.go:83] releasing machines lock for "old-k8s-version-927922", held for 20.521248173s
	I0103 20:13:50.740606   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .DriverName
	I0103 20:13:50.740879   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetIP
	I0103 20:13:50.743952   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:50.744357   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:79:06", ip: ""} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:13:50.744397   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:50.744668   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .DriverName
	I0103 20:13:50.745932   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .DriverName
	I0103 20:13:50.746189   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .DriverName
	I0103 20:13:50.746302   61400 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0103 20:13:50.746343   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHHostname
	I0103 20:13:50.746759   61400 ssh_runner.go:195] Run: cat /version.json
	I0103 20:13:50.746784   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHHostname
	I0103 20:13:50.749593   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:50.749994   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:79:06", ip: ""} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:13:50.750029   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:50.750496   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHPort
	I0103 20:13:50.750738   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHKeyPath
	I0103 20:13:50.750900   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHUsername
	I0103 20:13:50.751141   61400 sshutil.go:53] new ssh client: &{IP:192.168.72.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/old-k8s-version-927922/id_rsa Username:docker}
	I0103 20:13:50.751696   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHPort
	I0103 20:13:50.751779   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:50.751842   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHKeyPath
	I0103 20:13:50.751898   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:79:06", ip: ""} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:13:50.751960   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHUsername
	I0103 20:13:50.752031   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:50.752063   61400 sshutil.go:53] new ssh client: &{IP:192.168.72.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/old-k8s-version-927922/id_rsa Username:docker}
	I0103 20:13:50.841084   61400 ssh_runner.go:195] Run: systemctl --version
	I0103 20:13:50.882564   61400 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0103 20:13:51.041188   61400 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0103 20:13:51.049023   61400 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0103 20:13:51.049103   61400 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0103 20:13:51.068267   61400 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0103 20:13:51.068297   61400 start.go:475] detecting cgroup driver to use...
	I0103 20:13:51.068371   61400 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0103 20:13:51.086266   61400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0103 20:13:51.101962   61400 docker.go:203] disabling cri-docker service (if available) ...
	I0103 20:13:51.102030   61400 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0103 20:13:51.118269   61400 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0103 20:13:51.134642   61400 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0103 20:13:51.310207   61400 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0103 20:13:51.495609   61400 docker.go:219] disabling docker service ...
	I0103 20:13:51.495743   61400 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0103 20:13:51.512101   61400 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0103 20:13:51.527244   61400 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0103 20:13:51.696874   61400 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0103 20:13:51.836885   61400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0103 20:13:51.849905   61400 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0103 20:13:51.867827   61400 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0103 20:13:51.867895   61400 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:13:51.877598   61400 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0103 20:13:51.877713   61400 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:13:51.886744   61400 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:13:51.898196   61400 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:13:51.910021   61400 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0103 20:13:51.921882   61400 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0103 20:13:51.930668   61400 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0103 20:13:51.930727   61400 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0103 20:13:51.943294   61400 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0103 20:13:51.952273   61400 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0103 20:13:52.065108   61400 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0103 20:13:52.272042   61400 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0103 20:13:52.272143   61400 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0103 20:13:52.277268   61400 start.go:543] Will wait 60s for crictl version
	I0103 20:13:52.277436   61400 ssh_runner.go:195] Run: which crictl
	I0103 20:13:52.281294   61400 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0103 20:13:52.334056   61400 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0103 20:13:52.334231   61400 ssh_runner.go:195] Run: crio --version
	I0103 20:13:52.390900   61400 ssh_runner.go:195] Run: crio --version
	I0103 20:13:52.454400   61400 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I0103 20:13:52.455682   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetIP
	I0103 20:13:52.459194   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:52.459656   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:79:06", ip: ""} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:13:52.459683   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:52.460250   61400 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0103 20:13:52.465579   61400 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0103 20:13:52.480500   61400 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0103 20:13:52.480620   61400 ssh_runner.go:195] Run: sudo crictl images --output json
	I0103 20:13:52.532378   61400 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0103 20:13:52.532450   61400 ssh_runner.go:195] Run: which lz4
	I0103 20:13:52.537132   61400 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0103 20:13:52.541880   61400 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0103 20:13:52.541912   61400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0103 20:13:50.443235   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:13:52.942235   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:13:51.801673   62015 addons.go:508] enable addons completed in 1.467711333s: enabled=[metrics-server storage-provisioner default-storageclass]
	I0103 20:13:52.779944   62015 node_ready.go:58] node "no-preload-749210" has status "Ready":"False"
	I0103 20:13:50.699945   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:13:51.199773   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:13:51.227739   62050 api_server.go:72] duration metric: took 2.52863821s to wait for apiserver process to appear ...
	I0103 20:13:51.227768   62050 api_server.go:88] waiting for apiserver healthz status ...
	I0103 20:13:51.227789   62050 api_server.go:253] Checking apiserver healthz at https://192.168.39.139:8444/healthz ...
	I0103 20:13:51.228288   62050 api_server.go:269] stopped: https://192.168.39.139:8444/healthz: Get "https://192.168.39.139:8444/healthz": dial tcp 192.168.39.139:8444: connect: connection refused
	I0103 20:13:51.728906   62050 api_server.go:253] Checking apiserver healthz at https://192.168.39.139:8444/healthz ...
	I0103 20:13:55.679221   62050 api_server.go:279] https://192.168.39.139:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0103 20:13:55.679255   62050 api_server.go:103] status: https://192.168.39.139:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0103 20:13:55.679273   62050 api_server.go:253] Checking apiserver healthz at https://192.168.39.139:8444/healthz ...
	I0103 20:13:55.722466   62050 api_server.go:279] https://192.168.39.139:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0103 20:13:55.722528   62050 api_server.go:103] status: https://192.168.39.139:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0103 20:13:55.728699   62050 api_server.go:253] Checking apiserver healthz at https://192.168.39.139:8444/healthz ...
	I0103 20:13:55.771739   62050 api_server.go:279] https://192.168.39.139:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0103 20:13:55.771841   62050 api_server.go:103] status: https://192.168.39.139:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0103 20:13:56.228041   62050 api_server.go:253] Checking apiserver healthz at https://192.168.39.139:8444/healthz ...
	I0103 20:13:56.234578   62050 api_server.go:279] https://192.168.39.139:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0103 20:13:56.234618   62050 api_server.go:103] status: https://192.168.39.139:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0103 20:13:56.728122   62050 api_server.go:253] Checking apiserver healthz at https://192.168.39.139:8444/healthz ...
	I0103 20:13:56.734464   62050 api_server.go:279] https://192.168.39.139:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0103 20:13:56.734505   62050 api_server.go:103] status: https://192.168.39.139:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0103 20:13:57.228124   62050 api_server.go:253] Checking apiserver healthz at https://192.168.39.139:8444/healthz ...
	I0103 20:13:57.239527   62050 api_server.go:279] https://192.168.39.139:8444/healthz returned 200:
	ok
	I0103 20:13:57.253416   62050 api_server.go:141] control plane version: v1.28.4
	I0103 20:13:57.253445   62050 api_server.go:131] duration metric: took 6.025669125s to wait for apiserver health ...
	I0103 20:13:57.253456   62050 cni.go:84] Creating CNI manager for ""
	I0103 20:13:57.253464   62050 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0103 20:13:57.255608   62050 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0103 20:13:54.091654   61400 crio.go:444] Took 1.554550 seconds to copy over tarball
	I0103 20:13:54.091734   61400 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0103 20:13:57.252728   61400 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.160960283s)
	I0103 20:13:57.252762   61400 crio.go:451] Took 3.161068 seconds to extract the tarball
	I0103 20:13:57.252773   61400 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0103 20:13:57.307431   61400 ssh_runner.go:195] Run: sudo crictl images --output json
	I0103 20:13:57.362170   61400 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0103 20:13:57.362199   61400 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0103 20:13:57.362266   61400 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 20:13:57.362306   61400 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0103 20:13:57.362491   61400 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0103 20:13:57.362505   61400 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0103 20:13:57.362630   61400 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0103 20:13:57.362663   61400 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0103 20:13:57.362749   61400 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0103 20:13:57.362830   61400 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0103 20:13:57.364964   61400 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0103 20:13:57.364981   61400 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0103 20:13:57.364999   61400 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0103 20:13:57.365049   61400 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0103 20:13:57.365081   61400 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 20:13:57.365159   61400 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0103 20:13:57.365337   61400 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0103 20:13:57.365364   61400 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0103 20:13:57.585886   61400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0103 20:13:57.611291   61400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0103 20:13:57.622467   61400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0103 20:13:57.623443   61400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0103 20:13:57.627321   61400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0103 20:13:57.630211   61400 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0103 20:13:57.630253   61400 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0103 20:13:57.630299   61400 ssh_runner.go:195] Run: which crictl
	I0103 20:13:57.647358   61400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0103 20:13:57.670079   61400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0103 20:13:57.724516   61400 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0103 20:13:57.724560   61400 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0103 20:13:57.724606   61400 ssh_runner.go:195] Run: which crictl
	I0103 20:13:57.747338   61400 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0103 20:13:57.747387   61400 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0103 20:13:57.747451   61400 ssh_runner.go:195] Run: which crictl
	I0103 20:13:57.767682   61400 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0103 20:13:57.767741   61400 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0103 20:13:57.767749   61400 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0103 20:13:57.767772   61400 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0103 20:13:57.767782   61400 ssh_runner.go:195] Run: which crictl
	I0103 20:13:57.767778   61400 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0103 20:13:57.767834   61400 ssh_runner.go:195] Run: which crictl
	I0103 20:13:57.811841   61400 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0103 20:13:57.811895   61400 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0103 20:13:57.811861   61400 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0103 20:13:57.811948   61400 ssh_runner.go:195] Run: which crictl
	I0103 20:13:57.811984   61400 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0103 20:13:57.811948   61400 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0103 20:13:57.812053   61400 ssh_runner.go:195] Run: which crictl
	I0103 20:13:57.812098   61400 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0103 20:13:57.812128   61400 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0103 20:13:57.849648   61400 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0103 20:13:57.849722   61400 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0103 20:13:57.916421   61400 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0103 20:13:57.916483   61400 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0103 20:13:57.916529   61400 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I0103 20:13:57.936449   61400 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0103 20:13:57.936474   61400 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0103 20:13:57.936485   61400 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0103 20:13:57.936538   61400 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0103 20:13:55.436957   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:13:57.441634   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:13:55.278078   62015 node_ready.go:58] node "no-preload-749210" has status "Ready":"False"
	I0103 20:13:57.280673   62015 node_ready.go:58] node "no-preload-749210" has status "Ready":"False"
	I0103 20:13:58.185787   62015 node_ready.go:49] node "no-preload-749210" has status "Ready":"True"
	I0103 20:13:58.185819   62015 node_ready.go:38] duration metric: took 7.413368774s waiting for node "no-preload-749210" to be "Ready" ...
	I0103 20:13:58.185837   62015 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:13:58.196599   62015 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-rbx58" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:58.203024   62015 pod_ready.go:92] pod "coredns-76f75df574-rbx58" in "kube-system" namespace has status "Ready":"True"
	I0103 20:13:58.203047   62015 pod_ready.go:81] duration metric: took 6.423108ms waiting for pod "coredns-76f75df574-rbx58" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:58.203057   62015 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-749210" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:57.257123   62050 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0103 20:13:57.293641   62050 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0103 20:13:57.341721   62050 system_pods.go:43] waiting for kube-system pods to appear ...
	I0103 20:13:57.360995   62050 system_pods.go:59] 8 kube-system pods found
	I0103 20:13:57.361054   62050 system_pods.go:61] "coredns-5dd5756b68-zxzqg" [d066762e-7e1f-4b3a-9b21-6a7a3ca53edd] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0103 20:13:57.361065   62050 system_pods.go:61] "etcd-default-k8s-diff-port-018788" [c0023ec6-ae61-4532-840e-287e9945f4ec] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0103 20:13:57.361109   62050 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-018788" [bba03f36-cef8-4e19-adc5-1a65756bdf1c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0103 20:13:57.361132   62050 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-018788" [baf7a3c2-3573-4977-be30-d63e4df2de22] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0103 20:13:57.361147   62050 system_pods.go:61] "kube-proxy-wqjlv" [de5a1b04-4bce-4111-bfe8-2adb2f947d78] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0103 20:13:57.361171   62050 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-018788" [cdc74e5c-0085-49ae-9471-fce52a1a6b2f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0103 20:13:57.361189   62050 system_pods.go:61] "metrics-server-57f55c9bc5-pgbbj" [ee3963d9-1627-4e78-91e5-1f92c2011f4b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0103 20:13:57.361198   62050 system_pods.go:61] "storage-provisioner" [ef3511cb-5587-4ea5-86b6-d52cc5afb226] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0103 20:13:57.361207   62050 system_pods.go:74] duration metric: took 19.402129ms to wait for pod list to return data ...
	I0103 20:13:57.361218   62050 node_conditions.go:102] verifying NodePressure condition ...
	I0103 20:13:57.369396   62050 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0103 20:13:57.369435   62050 node_conditions.go:123] node cpu capacity is 2
	I0103 20:13:57.369449   62050 node_conditions.go:105] duration metric: took 8.224276ms to run NodePressure ...
	I0103 20:13:57.369470   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:57.615954   62050 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0103 20:13:57.624280   62050 kubeadm.go:787] kubelet initialised
	I0103 20:13:57.624312   62050 kubeadm.go:788] duration metric: took 8.328431ms waiting for restarted kubelet to initialise ...
	I0103 20:13:57.624321   62050 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:13:57.637920   62050 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-zxzqg" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:58.734401   62050 pod_ready.go:97] node "default-k8s-diff-port-018788" hosting pod "coredns-5dd5756b68-zxzqg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018788" has status "Ready":"False"
	I0103 20:13:58.734439   62050 pod_ready.go:81] duration metric: took 1.096478242s waiting for pod "coredns-5dd5756b68-zxzqg" in "kube-system" namespace to be "Ready" ...
	E0103 20:13:58.734454   62050 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-018788" hosting pod "coredns-5dd5756b68-zxzqg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018788" has status "Ready":"False"
	I0103 20:13:58.734463   62050 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-018788" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:59.605120   62050 pod_ready.go:97] node "default-k8s-diff-port-018788" hosting pod "etcd-default-k8s-diff-port-018788" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018788" has status "Ready":"False"
	I0103 20:13:59.605156   62050 pod_ready.go:81] duration metric: took 870.676494ms waiting for pod "etcd-default-k8s-diff-port-018788" in "kube-system" namespace to be "Ready" ...
	E0103 20:13:59.605168   62050 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-018788" hosting pod "etcd-default-k8s-diff-port-018788" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018788" has status "Ready":"False"
	I0103 20:13:59.605174   62050 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-018788" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:00.176543   62050 pod_ready.go:97] node "default-k8s-diff-port-018788" hosting pod "kube-apiserver-default-k8s-diff-port-018788" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018788" has status "Ready":"False"
	I0103 20:14:00.176583   62050 pod_ready.go:81] duration metric: took 571.400586ms waiting for pod "kube-apiserver-default-k8s-diff-port-018788" in "kube-system" namespace to be "Ready" ...
	E0103 20:14:00.176599   62050 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-018788" hosting pod "kube-apiserver-default-k8s-diff-port-018788" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018788" has status "Ready":"False"
	I0103 20:14:00.176608   62050 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-018788" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:00.201556   62050 pod_ready.go:97] node "default-k8s-diff-port-018788" hosting pod "kube-controller-manager-default-k8s-diff-port-018788" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018788" has status "Ready":"False"
	I0103 20:14:00.201620   62050 pod_ready.go:81] duration metric: took 24.987825ms waiting for pod "kube-controller-manager-default-k8s-diff-port-018788" in "kube-system" namespace to be "Ready" ...
	E0103 20:14:00.201637   62050 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-018788" hosting pod "kube-controller-manager-default-k8s-diff-port-018788" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018788" has status "Ready":"False"
	I0103 20:14:00.201647   62050 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-wqjlv" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:00.233069   62050 pod_ready.go:97] node "default-k8s-diff-port-018788" hosting pod "kube-proxy-wqjlv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018788" has status "Ready":"False"
	I0103 20:14:00.233108   62050 pod_ready.go:81] duration metric: took 31.451633ms waiting for pod "kube-proxy-wqjlv" in "kube-system" namespace to be "Ready" ...
	E0103 20:14:00.233127   62050 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-018788" hosting pod "kube-proxy-wqjlv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018788" has status "Ready":"False"
	I0103 20:14:00.233135   62050 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-018788" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:00.253505   62050 pod_ready.go:97] node "default-k8s-diff-port-018788" hosting pod "kube-scheduler-default-k8s-diff-port-018788" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018788" has status "Ready":"False"
	I0103 20:14:00.253534   62050 pod_ready.go:81] duration metric: took 20.386039ms waiting for pod "kube-scheduler-default-k8s-diff-port-018788" in "kube-system" namespace to be "Ready" ...
	E0103 20:14:00.253550   62050 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-018788" hosting pod "kube-scheduler-default-k8s-diff-port-018788" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018788" has status "Ready":"False"
	I0103 20:14:00.253559   62050 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:00.272626   62050 pod_ready.go:97] node "default-k8s-diff-port-018788" hosting pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018788" has status "Ready":"False"
	I0103 20:14:00.272661   62050 pod_ready.go:81] duration metric: took 19.09311ms waiting for pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace to be "Ready" ...
	E0103 20:14:00.272677   62050 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-018788" hosting pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018788" has status "Ready":"False"
	I0103 20:14:00.272687   62050 pod_ready.go:38] duration metric: took 2.64835186s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:14:00.272705   62050 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0103 20:14:00.321126   62050 ops.go:34] apiserver oom_adj: -16
	I0103 20:14:00.321189   62050 kubeadm.go:640] restartCluster took 23.164374098s
	I0103 20:14:00.321205   62050 kubeadm.go:406] StartCluster complete in 23.210428007s
	I0103 20:14:00.321226   62050 settings.go:142] acquiring lock: {Name:mkd213c48538fa01cb82b417485055a8adbf5e4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:14:00.321322   62050 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17885-9609/kubeconfig
	I0103 20:14:00.323470   62050 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-9609/kubeconfig: {Name:mkbd4e6a8b39f5a4a43fb71671a7bbd8b1617cf1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:14:00.323925   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0103 20:14:00.324242   62050 config.go:182] Loaded profile config "default-k8s-diff-port-018788": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 20:14:00.324381   62050 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0103 20:14:00.324467   62050 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-018788"
	I0103 20:14:00.324487   62050 addons.go:237] Setting addon storage-provisioner=true in "default-k8s-diff-port-018788"
	W0103 20:14:00.324495   62050 addons.go:246] addon storage-provisioner should already be in state true
	I0103 20:14:00.324536   62050 host.go:66] Checking if "default-k8s-diff-port-018788" exists ...
	I0103 20:14:00.324984   62050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:14:00.325013   62050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:14:00.325285   62050 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-018788"
	I0103 20:14:00.325304   62050 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-018788"
	I0103 20:14:00.325329   62050 addons.go:237] Setting addon metrics-server=true in "default-k8s-diff-port-018788"
	W0103 20:14:00.325337   62050 addons.go:246] addon metrics-server should already be in state true
	I0103 20:14:00.325376   62050 host.go:66] Checking if "default-k8s-diff-port-018788" exists ...
	I0103 20:14:00.325309   62050 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-018788"
	I0103 20:14:00.325722   62050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:14:00.325740   62050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:14:00.325935   62050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:14:00.326021   62050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:14:00.347496   62050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42465
	I0103 20:14:00.347895   62050 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:14:00.348392   62050 main.go:141] libmachine: Using API Version  1
	I0103 20:14:00.348415   62050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:14:00.348728   62050 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:14:00.349192   62050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:14:00.349228   62050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:14:00.349916   62050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42905
	I0103 20:14:00.350369   62050 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:14:00.351043   62050 main.go:141] libmachine: Using API Version  1
	I0103 20:14:00.351067   62050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:14:00.351579   62050 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:14:00.352288   62050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:14:00.352392   62050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:14:00.358540   62050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33231
	I0103 20:14:00.359079   62050 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:14:00.359582   62050 main.go:141] libmachine: Using API Version  1
	I0103 20:14:00.359607   62050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:14:00.359939   62050 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:14:00.360114   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetState
	I0103 20:14:00.364583   62050 addons.go:237] Setting addon default-storageclass=true in "default-k8s-diff-port-018788"
	W0103 20:14:00.364614   62050 addons.go:246] addon default-storageclass should already be in state true
	I0103 20:14:00.364645   62050 host.go:66] Checking if "default-k8s-diff-port-018788" exists ...
	I0103 20:14:00.365032   62050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:14:00.365080   62050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:14:00.365268   62050 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-018788" context rescaled to 1 replicas
	I0103 20:14:00.365315   62050 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.139 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0103 20:14:00.367628   62050 out.go:177] * Verifying Kubernetes components...
	I0103 20:14:00.376061   62050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 20:14:00.382421   62050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42521
	I0103 20:14:00.382601   62050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39615
	I0103 20:14:00.382708   62050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40189
	I0103 20:14:00.383285   62050 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:14:00.383310   62050 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:14:00.383837   62050 main.go:141] libmachine: Using API Version  1
	I0103 20:14:00.383837   62050 main.go:141] libmachine: Using API Version  1
	I0103 20:14:00.383855   62050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:14:00.383862   62050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:14:00.384200   62050 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:14:00.384674   62050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:14:00.384701   62050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:14:00.384740   62050 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:14:00.384914   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetState
	I0103 20:14:00.386513   62050 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:14:00.387010   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .DriverName
	I0103 20:14:00.387325   62050 main.go:141] libmachine: Using API Version  1
	I0103 20:14:00.387343   62050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:14:00.389302   62050 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0103 20:14:00.390931   62050 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0103 20:14:00.390945   62050 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0103 20:14:00.390960   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHHostname
	I0103 20:14:00.390651   62050 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:14:00.392318   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetState
	I0103 20:14:00.394641   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:14:00.395185   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c8:9f", ip: ""} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:14:00.395212   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:14:00.395483   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHPort
	I0103 20:14:00.395954   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .DriverName
	I0103 20:14:00.398448   62050 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 20:14:00.400431   62050 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0103 20:14:00.400454   62050 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0103 20:14:00.400476   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHHostname
	I0103 20:14:00.404480   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:14:00.405112   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c8:9f", ip: ""} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:14:00.405145   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:14:00.405765   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHPort
	I0103 20:14:00.405971   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHKeyPath
	I0103 20:14:00.407610   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHUsername
	I0103 20:14:00.407808   62050 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/default-k8s-diff-port-018788/id_rsa Username:docker}
	I0103 20:14:00.410796   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHKeyPath
	I0103 20:14:00.410964   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHUsername
	I0103 20:14:00.411436   62050 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/default-k8s-diff-port-018788/id_rsa Username:docker}
	I0103 20:14:00.417626   62050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41715
	I0103 20:14:00.418201   62050 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:14:00.422710   62050 main.go:141] libmachine: Using API Version  1
	I0103 20:14:00.422743   62050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:14:00.423232   62050 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:14:00.423421   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetState
	I0103 20:14:00.425364   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .DriverName
	I0103 20:14:00.425678   62050 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0103 20:14:00.425697   62050 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0103 20:14:00.425717   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHHostname
	I0103 20:14:00.429190   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:14:00.429720   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c8:9f", ip: ""} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:14:00.429745   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:14:00.429898   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHPort
	I0103 20:14:00.430599   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHKeyPath
	I0103 20:14:00.430803   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHUsername
	I0103 20:14:00.430946   62050 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/default-k8s-diff-port-018788/id_rsa Username:docker}
	I0103 20:14:00.621274   62050 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0103 20:14:00.621356   62050 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0103 20:14:00.641979   62050 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0103 20:14:00.681414   62050 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0103 20:14:00.682076   62050 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0103 20:14:00.682118   62050 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0103 20:14:00.760063   62050 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0103 20:14:00.760095   62050 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0103 20:14:00.833648   62050 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0103 20:14:00.840025   62050 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-018788" to be "Ready" ...
	I0103 20:14:00.840147   62050 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0103 20:14:02.423584   62050 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.78156374s)
	I0103 20:14:02.423631   62050 main.go:141] libmachine: Making call to close driver server
	I0103 20:14:02.423646   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .Close
	I0103 20:14:02.423584   62050 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.742133551s)
	I0103 20:14:02.423765   62050 main.go:141] libmachine: Making call to close driver server
	I0103 20:14:02.423784   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .Close
	I0103 20:14:02.423889   62050 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:14:02.423906   62050 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:14:02.423920   62050 main.go:141] libmachine: Making call to close driver server
	I0103 20:14:02.423930   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .Close
	I0103 20:14:02.424042   62050 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:14:02.424061   62050 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:14:02.424078   62050 main.go:141] libmachine: Making call to close driver server
	I0103 20:14:02.424076   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | Closing plugin on server side
	I0103 20:14:02.424104   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .Close
	I0103 20:14:02.424125   62050 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:14:02.424137   62050 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:14:02.424472   62050 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:14:02.424489   62050 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:14:02.424502   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | Closing plugin on server side
	I0103 20:14:02.431339   62050 main.go:141] libmachine: Making call to close driver server
	I0103 20:14:02.431368   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .Close
	I0103 20:14:02.431754   62050 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:14:02.431789   62050 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:14:02.431809   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | Closing plugin on server side
	I0103 20:14:02.575829   62050 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.742131608s)
	I0103 20:14:02.575880   62050 main.go:141] libmachine: Making call to close driver server
	I0103 20:14:02.575899   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .Close
	I0103 20:14:02.576351   62050 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:14:02.576374   62050 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:14:02.576391   62050 main.go:141] libmachine: Making call to close driver server
	I0103 20:14:02.576400   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .Close
	I0103 20:14:02.576619   62050 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:14:02.576632   62050 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:14:02.576641   62050 addons.go:473] Verifying addon metrics-server=true in "default-k8s-diff-port-018788"
	I0103 20:14:02.578918   62050 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0103 20:13:58.180342   61400 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I0103 20:13:58.180407   61400 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0103 20:13:58.180464   61400 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0103 20:13:58.194447   61400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 20:13:58.726157   61400 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I0103 20:13:58.726232   61400 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I0103 20:14:00.187852   61400 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.461700942s)
	I0103 20:14:00.187973   61400 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (1.461718478s)
	I0103 20:14:00.188007   61400 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I0103 20:14:00.188104   61400 cache_images.go:92] LoadImages completed in 2.825887616s
	W0103 20:14:00.188202   61400 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0: no such file or directory
	I0103 20:14:00.188285   61400 ssh_runner.go:195] Run: crio config
	I0103 20:14:00.270343   61400 cni.go:84] Creating CNI manager for ""
	I0103 20:14:00.270372   61400 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0103 20:14:00.270393   61400 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0103 20:14:00.270416   61400 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.12 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-927922 NodeName:old-k8s-version-927922 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.12"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.12 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0103 20:14:00.270624   61400 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.12
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-927922"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.12
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.12"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-927922
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.72.12:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0103 20:14:00.270765   61400 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-927922 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.12
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-927922 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0103 20:14:00.270842   61400 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0103 20:14:00.282011   61400 binaries.go:44] Found k8s binaries, skipping transfer
	I0103 20:14:00.282093   61400 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0103 20:14:00.292954   61400 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0103 20:14:00.314616   61400 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0103 20:14:00.366449   61400 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I0103 20:14:00.406579   61400 ssh_runner.go:195] Run: grep 192.168.72.12	control-plane.minikube.internal$ /etc/hosts
	I0103 20:14:00.410923   61400 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.12	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0103 20:14:00.430315   61400 certs.go:56] Setting up /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/old-k8s-version-927922 for IP: 192.168.72.12
	I0103 20:14:00.430352   61400 certs.go:190] acquiring lock for shared ca certs: {Name:mkcbd6a6a2f3ee7625ecf4a1f72bb7f9689bd33d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:14:00.430553   61400 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17885-9609/.minikube/ca.key
	I0103 20:14:00.430619   61400 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.key
	I0103 20:14:00.430718   61400 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/old-k8s-version-927922/client.key
	I0103 20:14:00.430798   61400 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/old-k8s-version-927922/apiserver.key.9a91cab3
	I0103 20:14:00.430854   61400 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/old-k8s-version-927922/proxy-client.key
	I0103 20:14:00.431018   61400 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795.pem (1338 bytes)
	W0103 20:14:00.431071   61400 certs.go:433] ignoring /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795_empty.pem, impossibly tiny 0 bytes
	I0103 20:14:00.431083   61400 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem (1675 bytes)
	I0103 20:14:00.431123   61400 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem (1078 bytes)
	I0103 20:14:00.431158   61400 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem (1123 bytes)
	I0103 20:14:00.431195   61400 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem (1679 bytes)
	I0103 20:14:00.431250   61400 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem (1708 bytes)
	I0103 20:14:00.432123   61400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/old-k8s-version-927922/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0103 20:14:00.472877   61400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/old-k8s-version-927922/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0103 20:14:00.505153   61400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/old-k8s-version-927922/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0103 20:14:00.533850   61400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/old-k8s-version-927922/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0103 20:14:00.564548   61400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0103 20:14:00.596464   61400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0103 20:14:00.626607   61400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0103 20:14:00.655330   61400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0103 20:14:00.681817   61400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem --> /usr/share/ca-certificates/167952.pem (1708 bytes)
	I0103 20:14:00.711039   61400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0103 20:14:00.742406   61400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795.pem --> /usr/share/ca-certificates/16795.pem (1338 bytes)
	I0103 20:14:00.768583   61400 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0103 20:14:00.786833   61400 ssh_runner.go:195] Run: openssl version
	I0103 20:14:00.793561   61400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167952.pem && ln -fs /usr/share/ca-certificates/167952.pem /etc/ssl/certs/167952.pem"
	I0103 20:14:00.807558   61400 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167952.pem
	I0103 20:14:00.812755   61400 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  3 19:07 /usr/share/ca-certificates/167952.pem
	I0103 20:14:00.812816   61400 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167952.pem
	I0103 20:14:00.820657   61400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167952.pem /etc/ssl/certs/3ec20f2e.0"
	I0103 20:14:00.832954   61400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0103 20:14:00.844707   61400 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:14:00.850334   61400 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  3 18:58 /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:14:00.850425   61400 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:14:00.856592   61400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0103 20:14:00.868105   61400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16795.pem && ln -fs /usr/share/ca-certificates/16795.pem /etc/ssl/certs/16795.pem"
	I0103 20:14:00.881551   61400 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16795.pem
	I0103 20:14:00.886462   61400 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  3 19:07 /usr/share/ca-certificates/16795.pem
	I0103 20:14:00.886550   61400 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16795.pem
	I0103 20:14:00.892487   61400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16795.pem /etc/ssl/certs/51391683.0"
	I0103 20:14:00.904363   61400 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0103 20:14:00.909429   61400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0103 20:14:00.915940   61400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0103 20:14:00.922496   61400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0103 20:14:00.928504   61400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0103 20:14:00.936016   61400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0103 20:14:00.943008   61400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
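The openssl x509 ... -checkend 86400 runs above ask, for each control-plane certificate, whether it will still be valid 86400 seconds (24 hours) from now; a non-zero exit marks the certificate as close to expiry. As a rough illustration only (this is not minikube's actual code, and the path in main is just an example), the same check in Go looks like this:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file
// expires within the given window, mirroring `openssl x509 -noout
// -in <cert> -checkend 86400` from the log above.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	// Example path; inside the minikube VM the certs live under /var/lib/minikube/certs.
	soon, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
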
	I0103 20:14:00.949401   61400 kubeadm.go:404] StartCluster: {Name:old-k8s-version-927922 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-927922 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.12 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 20:14:00.949524   61400 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0103 20:14:00.949614   61400 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0103 20:14:00.999406   61400 cri.go:89] found id: ""
	I0103 20:14:00.999494   61400 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0103 20:14:01.011041   61400 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0103 20:14:01.011063   61400 kubeadm.go:636] restartCluster start
	I0103 20:14:01.011130   61400 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0103 20:14:01.024488   61400 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:01.026094   61400 kubeconfig.go:92] found "old-k8s-version-927922" server: "https://192.168.72.12:8443"
	I0103 20:14:01.029577   61400 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0103 20:14:01.041599   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:01.041674   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:01.055545   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:01.542034   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:01.542135   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:01.554826   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:02.042049   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:02.042166   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:02.056693   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:02.542275   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:02.542363   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:02.557025   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:03.041864   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:03.041968   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:03.054402   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:59.937077   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:02.440275   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:00.287822   62015 pod_ready.go:102] pod "etcd-no-preload-749210" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:00.712464   62015 pod_ready.go:92] pod "etcd-no-preload-749210" in "kube-system" namespace has status "Ready":"True"
	I0103 20:14:00.712486   62015 pod_ready.go:81] duration metric: took 2.509421629s waiting for pod "etcd-no-preload-749210" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:00.712494   62015 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-749210" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:00.722133   62015 pod_ready.go:92] pod "kube-apiserver-no-preload-749210" in "kube-system" namespace has status "Ready":"True"
	I0103 20:14:00.722175   62015 pod_ready.go:81] duration metric: took 9.671952ms waiting for pod "kube-apiserver-no-preload-749210" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:00.722188   62015 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-749210" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:00.728860   62015 pod_ready.go:92] pod "kube-controller-manager-no-preload-749210" in "kube-system" namespace has status "Ready":"True"
	I0103 20:14:00.728888   62015 pod_ready.go:81] duration metric: took 6.691622ms waiting for pod "kube-controller-manager-no-preload-749210" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:00.728901   62015 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5hwf4" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:00.736669   62015 pod_ready.go:92] pod "kube-proxy-5hwf4" in "kube-system" namespace has status "Ready":"True"
	I0103 20:14:00.736690   62015 pod_ready.go:81] duration metric: took 7.783204ms waiting for pod "kube-proxy-5hwf4" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:00.736699   62015 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-749210" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:02.245720   62015 pod_ready.go:92] pod "kube-scheduler-no-preload-749210" in "kube-system" namespace has status "Ready":"True"
	I0103 20:14:02.245750   62015 pod_ready.go:81] duration metric: took 1.509042822s waiting for pod "kube-scheduler-no-preload-749210" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:02.245764   62015 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:04.253082   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:02.580440   62050 addons.go:508] enable addons completed in 2.256058454s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0103 20:14:02.845486   62050 node_ready.go:58] node "default-k8s-diff-port-018788" has status "Ready":"False"
	I0103 20:14:05.343961   62050 node_ready.go:58] node "default-k8s-diff-port-018788" has status "Ready":"False"
	I0103 20:14:03.542326   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:03.542407   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:03.554128   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:04.041685   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:04.041779   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:04.053727   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:04.542332   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:04.542417   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:04.554478   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:05.042026   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:05.042120   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:05.055763   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:05.541892   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:05.541996   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:05.554974   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:06.042576   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:06.042675   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:06.055902   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:06.542543   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:06.542636   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:06.555494   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:07.041757   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:07.041844   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:07.053440   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:07.542083   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:07.542162   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:07.555336   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:08.041841   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:08.041929   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:08.055229   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:04.936356   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:06.938795   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:06.754049   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:09.253568   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:06.345058   62050 node_ready.go:49] node "default-k8s-diff-port-018788" has status "Ready":"True"
	I0103 20:14:06.345083   62050 node_ready.go:38] duration metric: took 5.505020144s waiting for node "default-k8s-diff-port-018788" to be "Ready" ...
	I0103 20:14:06.345094   62050 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:14:06.351209   62050 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-zxzqg" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:06.357786   62050 pod_ready.go:92] pod "coredns-5dd5756b68-zxzqg" in "kube-system" namespace has status "Ready":"True"
	I0103 20:14:06.357811   62050 pod_ready.go:81] duration metric: took 6.576128ms waiting for pod "coredns-5dd5756b68-zxzqg" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:06.357819   62050 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-018788" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:08.365570   62050 pod_ready.go:102] pod "etcd-default-k8s-diff-port-018788" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:10.366402   62050 pod_ready.go:102] pod "etcd-default-k8s-diff-port-018788" in "kube-system" namespace has status "Ready":"False"
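The interleaved node_ready.go and pod_ready.go lines above are per-profile polling loops: each waits up to 6m0s, re-reading the node or pod every couple of seconds until its Ready condition turns True (the metrics-server pods in this run keep reporting False). A minimal client-go sketch of that pattern, with an illustrative kubeconfig path and pod name rather than whatever the test harness actually uses, might look like:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls a pod until its Ready condition is True or the
// deadline passes, the same shape of loop the pod_ready.go messages record.
func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
}

func main() {
	// Illustrative kubeconfig path, not the one used by the test harness.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(waitPodReady(cs, "kube-system", "etcd-default-k8s-diff-port-018788", 6*time.Minute))
}
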
	I0103 20:14:08.542285   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:08.542428   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:08.554155   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:09.041695   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:09.041800   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:09.054337   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:09.541733   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:09.541817   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:09.554231   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:10.041785   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:10.041863   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:10.053870   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:10.541893   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:10.541988   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:10.554220   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:11.042579   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:11.042662   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:11.054683   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:11.054717   61400 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0103 20:14:11.054728   61400 kubeadm.go:1135] stopping kube-system containers ...
	I0103 20:14:11.054738   61400 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0103 20:14:11.054804   61400 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0103 20:14:11.099741   61400 cri.go:89] found id: ""
	I0103 20:14:11.099806   61400 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0103 20:14:11.115939   61400 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0103 20:14:11.125253   61400 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0103 20:14:11.125309   61400 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0103 20:14:11.134126   61400 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0103 20:14:11.134151   61400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:14:11.244373   61400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:14:12.026578   61400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:14:12.238755   61400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:14:12.326635   61400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:14:12.411494   61400 api_server.go:52] waiting for apiserver process to appear ...
	I0103 20:14:12.411597   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:14:12.912324   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:14:09.437304   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:11.937833   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:11.755341   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:14.254295   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:10.864860   62050 pod_ready.go:92] pod "etcd-default-k8s-diff-port-018788" in "kube-system" namespace has status "Ready":"True"
	I0103 20:14:10.864892   62050 pod_ready.go:81] duration metric: took 4.507065243s waiting for pod "etcd-default-k8s-diff-port-018788" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:10.864906   62050 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-018788" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:10.871510   62050 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-018788" in "kube-system" namespace has status "Ready":"True"
	I0103 20:14:10.871532   62050 pod_ready.go:81] duration metric: took 6.618246ms waiting for pod "kube-apiserver-default-k8s-diff-port-018788" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:10.871542   62050 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-018788" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:10.877385   62050 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-018788" in "kube-system" namespace has status "Ready":"True"
	I0103 20:14:10.877411   62050 pod_ready.go:81] duration metric: took 5.859396ms waiting for pod "kube-controller-manager-default-k8s-diff-port-018788" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:10.877423   62050 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wqjlv" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:10.883355   62050 pod_ready.go:92] pod "kube-proxy-wqjlv" in "kube-system" namespace has status "Ready":"True"
	I0103 20:14:10.883381   62050 pod_ready.go:81] duration metric: took 5.949857ms waiting for pod "kube-proxy-wqjlv" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:10.883391   62050 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-018788" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:10.888160   62050 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-018788" in "kube-system" namespace has status "Ready":"True"
	I0103 20:14:10.888186   62050 pod_ready.go:81] duration metric: took 4.782893ms waiting for pod "kube-scheduler-default-k8s-diff-port-018788" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:10.888198   62050 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:12.896310   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:14.897306   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:13.412544   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:14:13.912006   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:14:13.939301   61400 api_server.go:72] duration metric: took 1.527807222s to wait for apiserver process to appear ...
	I0103 20:14:13.939328   61400 api_server.go:88] waiting for apiserver healthz status ...
	I0103 20:14:13.939357   61400 api_server.go:253] Checking apiserver healthz at https://192.168.72.12:8443/healthz ...
	I0103 20:14:13.941001   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:16.438272   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:16.752567   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:18.758446   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:17.397429   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:19.399199   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:18.940403   61400 api_server.go:269] stopped: https://192.168.72.12:8443/healthz: Get "https://192.168.72.12:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0103 20:14:18.940444   61400 api_server.go:253] Checking apiserver healthz at https://192.168.72.12:8443/healthz ...
	I0103 20:14:19.563874   61400 api_server.go:279] https://192.168.72.12:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0103 20:14:19.563907   61400 api_server.go:103] status: https://192.168.72.12:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0103 20:14:19.563925   61400 api_server.go:253] Checking apiserver healthz at https://192.168.72.12:8443/healthz ...
	I0103 20:14:19.591366   61400 api_server.go:279] https://192.168.72.12:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0103 20:14:19.591397   61400 api_server.go:103] status: https://192.168.72.12:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0103 20:14:19.939684   61400 api_server.go:253] Checking apiserver healthz at https://192.168.72.12:8443/healthz ...
	I0103 20:14:19.951743   61400 api_server.go:279] https://192.168.72.12:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0103 20:14:19.951795   61400 api_server.go:103] status: https://192.168.72.12:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0103 20:14:20.439712   61400 api_server.go:253] Checking apiserver healthz at https://192.168.72.12:8443/healthz ...
	I0103 20:14:20.448251   61400 api_server.go:279] https://192.168.72.12:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0103 20:14:20.448289   61400 api_server.go:103] status: https://192.168.72.12:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0103 20:14:20.939773   61400 api_server.go:253] Checking apiserver healthz at https://192.168.72.12:8443/healthz ...
	I0103 20:14:20.946227   61400 api_server.go:279] https://192.168.72.12:8443/healthz returned 200:
	ok
	I0103 20:14:20.954666   61400 api_server.go:141] control plane version: v1.16.0
	I0103 20:14:20.954702   61400 api_server.go:131] duration metric: took 7.015366394s to wait for apiserver health ...
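The polling above is minikube waiting for the apiserver's /healthz endpoint to stop returning 403/500 (the rbac/bootstrap-roles, priority-class and ca-registration post-start hooks finish last) and return 200. A minimal sketch of that kind of health-wait loop, written in Go and assuming a self-signed apiserver certificate, might look like the following; it is illustrative only, not minikube's implementation.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver /healthz endpoint until it returns 200 or the
// deadline expires. Illustrative sketch only; not minikube's actual code.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// The apiserver in this setup uses a self-signed certificate, so the probe
		// skips verification (assumption made for this sketch).
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz answered "ok"
			}
			// 403 or 500 with "healthz check failed" means not ready yet; keep polling.
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.12:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}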
	I0103 20:14:20.954718   61400 cni.go:84] Creating CNI manager for ""
	I0103 20:14:20.954726   61400 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0103 20:14:20.956786   61400 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0103 20:14:20.958180   61400 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0103 20:14:20.969609   61400 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0103 20:14:20.986353   61400 system_pods.go:43] waiting for kube-system pods to appear ...
	I0103 20:14:20.996751   61400 system_pods.go:59] 8 kube-system pods found
	I0103 20:14:20.996786   61400 system_pods.go:61] "coredns-5644d7b6d9-99qhg" [d43c98b2-5ed4-42a7-bdb9-28f5b3c7b99f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0103 20:14:20.996795   61400 system_pods.go:61] "coredns-5644d7b6d9-nvbsl" [22884cc1-f360-4ee8-bafc-340bb24faa41] Running
	I0103 20:14:20.996804   61400 system_pods.go:61] "etcd-old-k8s-version-927922" [f395d0d3-416a-4915-b587-6e51eb8648a2] Running
	I0103 20:14:20.996811   61400 system_pods.go:61] "kube-apiserver-old-k8s-version-927922" [c62c011b-74fa-440c-9ff9-56721cb1a58d] Running
	I0103 20:14:20.996821   61400 system_pods.go:61] "kube-controller-manager-old-k8s-version-927922" [3d85024c-8cc4-4a99-b8b7-2151c10918f7] Pending
	I0103 20:14:20.996828   61400 system_pods.go:61] "kube-proxy-jk7jw" [ef720f69-1bfd-4e75-9943-ff7ee3145ecc] Running
	I0103 20:14:20.996835   61400 system_pods.go:61] "kube-scheduler-old-k8s-version-927922" [74ed1414-7a76-45bd-9c0e-e4c9670d4c1b] Running
	I0103 20:14:20.996845   61400 system_pods.go:61] "storage-provisioner" [4157ff41-1b3b-4eb7-b23b-2de69398161c] Running
	I0103 20:14:20.996857   61400 system_pods.go:74] duration metric: took 10.474644ms to wait for pod list to return data ...
	I0103 20:14:20.996870   61400 node_conditions.go:102] verifying NodePressure condition ...
	I0103 20:14:21.000635   61400 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0103 20:14:21.000665   61400 node_conditions.go:123] node cpu capacity is 2
	I0103 20:14:21.000677   61400 node_conditions.go:105] duration metric: took 3.80125ms to run NodePressure ...
	I0103 20:14:21.000698   61400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:14:21.233310   61400 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0103 20:14:21.241408   61400 kubeadm.go:787] kubelet initialised
	I0103 20:14:21.241445   61400 kubeadm.go:788] duration metric: took 8.096237ms waiting for restarted kubelet to initialise ...
	I0103 20:14:21.241456   61400 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:14:21.251897   61400 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-99qhg" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:21.264624   61400 pod_ready.go:97] node "old-k8s-version-927922" hosting pod "coredns-5644d7b6d9-99qhg" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-927922" has status "Ready":"False"
	I0103 20:14:21.264657   61400 pod_ready.go:81] duration metric: took 12.728783ms waiting for pod "coredns-5644d7b6d9-99qhg" in "kube-system" namespace to be "Ready" ...
	E0103 20:14:21.264670   61400 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-927922" hosting pod "coredns-5644d7b6d9-99qhg" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-927922" has status "Ready":"False"
	I0103 20:14:21.264700   61400 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-nvbsl" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:21.282371   61400 pod_ready.go:97] node "old-k8s-version-927922" hosting pod "coredns-5644d7b6d9-nvbsl" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-927922" has status "Ready":"False"
	I0103 20:14:21.282400   61400 pod_ready.go:81] duration metric: took 17.657706ms waiting for pod "coredns-5644d7b6d9-nvbsl" in "kube-system" namespace to be "Ready" ...
	E0103 20:14:21.282410   61400 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-927922" hosting pod "coredns-5644d7b6d9-nvbsl" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-927922" has status "Ready":"False"
	I0103 20:14:21.282416   61400 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-927922" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:21.288986   61400 pod_ready.go:97] node "old-k8s-version-927922" hosting pod "etcd-old-k8s-version-927922" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-927922" has status "Ready":"False"
	I0103 20:14:21.289016   61400 pod_ready.go:81] duration metric: took 6.590018ms waiting for pod "etcd-old-k8s-version-927922" in "kube-system" namespace to be "Ready" ...
	E0103 20:14:21.289028   61400 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-927922" hosting pod "etcd-old-k8s-version-927922" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-927922" has status "Ready":"False"
	I0103 20:14:21.289036   61400 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-927922" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:21.391318   61400 pod_ready.go:97] node "old-k8s-version-927922" hosting pod "kube-apiserver-old-k8s-version-927922" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-927922" has status "Ready":"False"
	I0103 20:14:21.391358   61400 pod_ready.go:81] duration metric: took 102.309139ms waiting for pod "kube-apiserver-old-k8s-version-927922" in "kube-system" namespace to be "Ready" ...
	E0103 20:14:21.391371   61400 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-927922" hosting pod "kube-apiserver-old-k8s-version-927922" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-927922" has status "Ready":"False"
	I0103 20:14:21.391390   61400 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-927922" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:21.790147   61400 pod_ready.go:97] node "old-k8s-version-927922" hosting pod "kube-controller-manager-old-k8s-version-927922" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-927922" has status "Ready":"False"
	I0103 20:14:21.790184   61400 pod_ready.go:81] duration metric: took 398.776559ms waiting for pod "kube-controller-manager-old-k8s-version-927922" in "kube-system" namespace to be "Ready" ...
	E0103 20:14:21.790202   61400 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-927922" hosting pod "kube-controller-manager-old-k8s-version-927922" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-927922" has status "Ready":"False"
	I0103 20:14:21.790213   61400 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jk7jw" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:22.190088   61400 pod_ready.go:97] node "old-k8s-version-927922" hosting pod "kube-proxy-jk7jw" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-927922" has status "Ready":"False"
	I0103 20:14:22.190118   61400 pod_ready.go:81] duration metric: took 399.895826ms waiting for pod "kube-proxy-jk7jw" in "kube-system" namespace to be "Ready" ...
	E0103 20:14:22.190132   61400 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-927922" hosting pod "kube-proxy-jk7jw" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-927922" has status "Ready":"False"
	I0103 20:14:22.190146   61400 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-927922" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:22.590412   61400 pod_ready.go:97] node "old-k8s-version-927922" hosting pod "kube-scheduler-old-k8s-version-927922" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-927922" has status "Ready":"False"
	I0103 20:14:22.590470   61400 pod_ready.go:81] duration metric: took 400.308646ms waiting for pod "kube-scheduler-old-k8s-version-927922" in "kube-system" namespace to be "Ready" ...
	E0103 20:14:22.590484   61400 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-927922" hosting pod "kube-scheduler-old-k8s-version-927922" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-927922" has status "Ready":"False"
	I0103 20:14:22.590494   61400 pod_ready.go:38] duration metric: took 1.349028144s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
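Each pod_ready.go line above is one iteration of a wait for a system-critical pod to report the Ready condition; while the node itself is still NotReady the wait is skipped with an error and retried later. A hedged client-go sketch of such a readiness check follows; the kubeconfig path is an assumption, the pod and namespace names are taken from the log, and this is not the exact helper minikube uses.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Kubeconfig path is an assumption for this sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll a named pod in kube-system until Ready or a 4-minute deadline, mirroring
	// the "waiting up to 4m0s for pod ... to be Ready" lines in the log.
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-old-k8s-version-927922", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}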
	I0103 20:14:22.590533   61400 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0103 20:14:22.610035   61400 ops.go:34] apiserver oom_adj: -16
	I0103 20:14:22.610060   61400 kubeadm.go:640] restartCluster took 21.598991094s
	I0103 20:14:22.610071   61400 kubeadm.go:406] StartCluster complete in 21.660680377s
	I0103 20:14:22.610091   61400 settings.go:142] acquiring lock: {Name:mkd213c48538fa01cb82b417485055a8adbf5e4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:14:22.610178   61400 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17885-9609/kubeconfig
	I0103 20:14:22.613053   61400 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-9609/kubeconfig: {Name:mkbd4e6a8b39f5a4a43fb71671a7bbd8b1617cf1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:14:22.613314   61400 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0103 20:14:22.613472   61400 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0103 20:14:22.613563   61400 config.go:182] Loaded profile config "old-k8s-version-927922": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0103 20:14:22.613570   61400 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-927922"
	I0103 20:14:22.613584   61400 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-927922"
	I0103 20:14:22.613597   61400 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-927922"
	I0103 20:14:22.613625   61400 addons.go:237] Setting addon metrics-server=true in "old-k8s-version-927922"
	W0103 20:14:22.613637   61400 addons.go:246] addon metrics-server should already be in state true
	I0103 20:14:22.613639   61400 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-927922"
	I0103 20:14:22.613605   61400 addons.go:237] Setting addon storage-provisioner=true in "old-k8s-version-927922"
	W0103 20:14:22.613706   61400 addons.go:246] addon storage-provisioner should already be in state true
	I0103 20:14:22.613769   61400 host.go:66] Checking if "old-k8s-version-927922" exists ...
	I0103 20:14:22.613691   61400 host.go:66] Checking if "old-k8s-version-927922" exists ...
	I0103 20:14:22.614097   61400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:14:22.614129   61400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:14:22.614170   61400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:14:22.614204   61400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:14:22.614293   61400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:14:22.614334   61400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:14:22.631032   61400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43511
	I0103 20:14:22.631689   61400 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:14:22.632149   61400 main.go:141] libmachine: Using API Version  1
	I0103 20:14:22.632172   61400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:14:22.632553   61400 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:14:22.632811   61400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46781
	I0103 20:14:22.632820   61400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42907
	I0103 20:14:22.633222   61400 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:14:22.633340   61400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:14:22.633352   61400 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:14:22.633385   61400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:14:22.633695   61400 main.go:141] libmachine: Using API Version  1
	I0103 20:14:22.633719   61400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:14:22.634106   61400 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:14:22.634117   61400 main.go:141] libmachine: Using API Version  1
	I0103 20:14:22.634139   61400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:14:22.634544   61400 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:14:22.634711   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetState
	I0103 20:14:22.634782   61400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:14:22.634821   61400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:14:22.639076   61400 addons.go:237] Setting addon default-storageclass=true in "old-k8s-version-927922"
	W0103 20:14:22.639233   61400 addons.go:246] addon default-storageclass should already be in state true
	I0103 20:14:22.639274   61400 host.go:66] Checking if "old-k8s-version-927922" exists ...
	I0103 20:14:22.640636   61400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:14:22.640703   61400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:14:22.653581   61400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38773
	I0103 20:14:22.654135   61400 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:14:22.654693   61400 main.go:141] libmachine: Using API Version  1
	I0103 20:14:22.654720   61400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:14:22.655050   61400 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:14:22.655267   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetState
	I0103 20:14:22.655611   61400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45149
	I0103 20:14:22.656058   61400 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:14:22.656503   61400 main.go:141] libmachine: Using API Version  1
	I0103 20:14:22.656527   61400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:14:22.656976   61400 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:14:22.657189   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetState
	I0103 20:14:22.657904   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .DriverName
	I0103 20:14:22.660090   61400 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 20:14:22.659044   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .DriverName
	I0103 20:14:22.659283   61400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38149
	I0103 20:14:22.663010   61400 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0103 20:14:22.663022   61400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0103 20:14:22.663037   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHHostname
	I0103 20:14:22.664758   61400 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0103 20:14:22.663341   61400 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:14:22.665665   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:14:22.666177   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:79:06", ip: ""} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:14:22.666201   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:14:22.666255   61400 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0103 20:14:22.666266   61400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0103 20:14:22.666282   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHHostname
	I0103 20:14:22.666382   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHPort
	I0103 20:14:22.666505   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHKeyPath
	I0103 20:14:22.666726   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHUsername
	I0103 20:14:22.666884   61400 sshutil.go:53] new ssh client: &{IP:192.168.72.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/old-k8s-version-927922/id_rsa Username:docker}
	I0103 20:14:22.666901   61400 main.go:141] libmachine: Using API Version  1
	I0103 20:14:22.666926   61400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:14:22.667344   61400 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:14:22.667940   61400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:14:22.667983   61400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:14:22.668718   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:14:22.668933   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:79:06", ip: ""} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:14:22.668961   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:14:22.669116   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHPort
	I0103 20:14:22.669262   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHKeyPath
	I0103 20:14:22.669388   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHUsername
	I0103 20:14:22.669506   61400 sshutil.go:53] new ssh client: &{IP:192.168.72.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/old-k8s-version-927922/id_rsa Username:docker}
	I0103 20:14:22.711545   61400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42371
	I0103 20:14:22.711969   61400 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:14:22.712493   61400 main.go:141] libmachine: Using API Version  1
	I0103 20:14:22.712519   61400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:14:22.712853   61400 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:14:22.713077   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetState
	I0103 20:14:22.715086   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .DriverName
	I0103 20:14:22.715371   61400 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0103 20:14:22.715390   61400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0103 20:14:22.715405   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHHostname
	I0103 20:14:22.718270   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:14:22.718638   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:79:06", ip: ""} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:14:22.718671   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:14:22.718876   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHPort
	I0103 20:14:22.719076   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHKeyPath
	I0103 20:14:22.719263   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHUsername
	I0103 20:14:22.719451   61400 sshutil.go:53] new ssh client: &{IP:192.168.72.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/old-k8s-version-927922/id_rsa Username:docker}
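The sshutil lines show a dedicated SSH client being opened to the node (192.168.72.12:22, user docker, the profile's id_rsa key) for each addon so manifests can be copied and commands run remotely. A rough sketch of running a single command over SSH with golang.org/x/crypto/ssh is shown below; it only illustrates the idea, the exact transport minikube uses may differ, and the command string is an example.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runOverSSH dials the node with a private key and runs one command, similar in
// spirit to the sshutil/ssh_runner steps in the log above.
func runOverSSH(addr, user, keyPath, command string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM, no known_hosts (assumption)
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()
	out, err := session.CombinedOutput(command)
	return string(out), err
}

func main() {
	out, err := runOverSSH("192.168.72.12:22", "docker",
		"/home/jenkins/minikube-integration/17885-9609/.minikube/machines/old-k8s-version-927922/id_rsa",
		"sudo systemctl is-active kubelet") // example command
	fmt.Println(out, err)
}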
	I0103 20:14:22.795601   61400 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0103 20:14:22.887631   61400 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0103 20:14:22.887660   61400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0103 20:14:22.889717   61400 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0103 20:14:22.932293   61400 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0103 20:14:22.932324   61400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0103 20:14:22.939480   61400 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0103 20:14:22.979425   61400 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0103 20:14:22.979455   61400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0103 20:14:23.017495   61400 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0103 20:14:23.255786   61400 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-927922" context rescaled to 1 replicas
	I0103 20:14:23.255832   61400 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.12 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0103 20:14:23.257785   61400 out.go:177] * Verifying Kubernetes components...
	I0103 20:14:18.937821   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:21.435750   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:23.438082   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:23.259380   61400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 20:14:23.430371   61400 main.go:141] libmachine: Making call to close driver server
	I0103 20:14:23.430402   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .Close
	I0103 20:14:23.430532   61400 main.go:141] libmachine: Making call to close driver server
	I0103 20:14:23.430557   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .Close
	I0103 20:14:23.430710   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | Closing plugin on server side
	I0103 20:14:23.430741   61400 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:14:23.430778   61400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:14:23.430798   61400 main.go:141] libmachine: Making call to close driver server
	I0103 20:14:23.430806   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .Close
	I0103 20:14:23.432333   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | Closing plugin on server side
	I0103 20:14:23.432345   61400 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:14:23.432353   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | Closing plugin on server side
	I0103 20:14:23.432363   61400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:14:23.432373   61400 main.go:141] libmachine: Making call to close driver server
	I0103 20:14:23.432382   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .Close
	I0103 20:14:23.432383   61400 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:14:23.432394   61400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:14:23.432602   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | Closing plugin on server side
	I0103 20:14:23.432654   61400 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:14:23.432674   61400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:14:23.438313   61400 main.go:141] libmachine: Making call to close driver server
	I0103 20:14:23.438335   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .Close
	I0103 20:14:23.438566   61400 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:14:23.438585   61400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:14:23.438662   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | Closing plugin on server side
	I0103 20:14:23.598304   61400 main.go:141] libmachine: Making call to close driver server
	I0103 20:14:23.598338   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .Close
	I0103 20:14:23.598363   61400 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-927922" to be "Ready" ...
	I0103 20:14:23.598669   61400 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:14:23.598687   61400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:14:23.598696   61400 main.go:141] libmachine: Making call to close driver server
	I0103 20:14:23.598705   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .Close
	I0103 20:14:23.598917   61400 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:14:23.598938   61400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:14:23.598960   61400 addons.go:473] Verifying addon metrics-server=true in "old-k8s-version-927922"
	I0103 20:14:23.601038   61400 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0103 20:14:21.253707   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:23.254276   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:21.399352   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:23.895781   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:23.602562   61400 addons.go:508] enable addons completed in 989.095706ms: enabled=[storage-provisioner default-storageclass metrics-server]
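At this point each addon manifest has been copied under /etc/kubernetes/addons and applied with the cluster's bundled kubectl via sudo, as the ssh_runner lines above show. A simplified sketch of that write-then-apply step follows; the StorageClass manifest body is an assumption for illustration, and the sketch shells out locally rather than over SSH.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyAddon writes a manifest into the addons directory and applies it with the
// cluster's bundled kubectl, roughly mirroring the "scp memory --> ..." plus
// "kubectl apply -f ..." steps in the log. Paths follow the log; this is a sketch.
func applyAddon(path string, manifest []byte) error {
	if err := os.WriteFile(path, manifest, 0644); err != nil {
		return err
	}
	cmd := exec.Command("sudo", "env", "KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.16.0/kubectl", "apply", "-f", path)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	// Example manifest body (assumption); the real addon YAMLs are bundled with minikube.
	manifest := []byte("apiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n  name: standard\nprovisioner: k8s.io/minikube-hostpath\n")
	if err := applyAddon("/etc/kubernetes/addons/storageclass.yaml", manifest); err != nil {
		fmt.Println(err)
	}
}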
	I0103 20:14:25.602268   61400 node_ready.go:58] node "old-k8s-version-927922" has status "Ready":"False"
	I0103 20:14:27.602561   61400 node_ready.go:58] node "old-k8s-version-927922" has status "Ready":"False"
	I0103 20:14:25.439366   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:27.934938   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:25.753982   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:28.253688   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:26.398696   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:28.896789   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:29.603040   61400 node_ready.go:58] node "old-k8s-version-927922" has status "Ready":"False"
	I0103 20:14:30.102640   61400 node_ready.go:49] node "old-k8s-version-927922" has status "Ready":"True"
	I0103 20:14:30.102663   61400 node_ready.go:38] duration metric: took 6.504277703s waiting for node "old-k8s-version-927922" to be "Ready" ...
	I0103 20:14:30.102672   61400 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:14:30.107593   61400 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-nvbsl" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:30.112792   61400 pod_ready.go:92] pod "coredns-5644d7b6d9-nvbsl" in "kube-system" namespace has status "Ready":"True"
	I0103 20:14:30.112817   61400 pod_ready.go:81] duration metric: took 5.195453ms waiting for pod "coredns-5644d7b6d9-nvbsl" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:30.112828   61400 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-927922" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:30.117802   61400 pod_ready.go:92] pod "etcd-old-k8s-version-927922" in "kube-system" namespace has status "Ready":"True"
	I0103 20:14:30.117827   61400 pod_ready.go:81] duration metric: took 4.989616ms waiting for pod "etcd-old-k8s-version-927922" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:30.117839   61400 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-927922" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:30.123548   61400 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-927922" in "kube-system" namespace has status "Ready":"True"
	I0103 20:14:30.123570   61400 pod_ready.go:81] duration metric: took 5.723206ms waiting for pod "kube-apiserver-old-k8s-version-927922" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:30.123580   61400 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-927922" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:30.128232   61400 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-927922" in "kube-system" namespace has status "Ready":"True"
	I0103 20:14:30.128257   61400 pod_ready.go:81] duration metric: took 4.670196ms waiting for pod "kube-controller-manager-old-k8s-version-927922" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:30.128269   61400 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jk7jw" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:30.503735   61400 pod_ready.go:92] pod "kube-proxy-jk7jw" in "kube-system" namespace has status "Ready":"True"
	I0103 20:14:30.503782   61400 pod_ready.go:81] duration metric: took 375.504442ms waiting for pod "kube-proxy-jk7jw" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:30.503796   61400 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-927922" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:30.903117   61400 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-927922" in "kube-system" namespace has status "Ready":"True"
	I0103 20:14:30.903145   61400 pod_ready.go:81] duration metric: took 399.341883ms waiting for pod "kube-scheduler-old-k8s-version-927922" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:30.903155   61400 pod_ready.go:38] duration metric: took 800.474934ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:14:30.903167   61400 api_server.go:52] waiting for apiserver process to appear ...
	I0103 20:14:30.903215   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:14:30.917506   61400 api_server.go:72] duration metric: took 7.661640466s to wait for apiserver process to appear ...
	I0103 20:14:30.917537   61400 api_server.go:88] waiting for apiserver healthz status ...
	I0103 20:14:30.917558   61400 api_server.go:253] Checking apiserver healthz at https://192.168.72.12:8443/healthz ...
	I0103 20:14:30.923921   61400 api_server.go:279] https://192.168.72.12:8443/healthz returned 200:
	ok
	I0103 20:14:30.924810   61400 api_server.go:141] control plane version: v1.16.0
	I0103 20:14:30.924830   61400 api_server.go:131] duration metric: took 7.286806ms to wait for apiserver health ...
	I0103 20:14:30.924837   61400 system_pods.go:43] waiting for kube-system pods to appear ...
	I0103 20:14:31.105108   61400 system_pods.go:59] 7 kube-system pods found
	I0103 20:14:31.105140   61400 system_pods.go:61] "coredns-5644d7b6d9-nvbsl" [22884cc1-f360-4ee8-bafc-340bb24faa41] Running
	I0103 20:14:31.105144   61400 system_pods.go:61] "etcd-old-k8s-version-927922" [f395d0d3-416a-4915-b587-6e51eb8648a2] Running
	I0103 20:14:31.105149   61400 system_pods.go:61] "kube-apiserver-old-k8s-version-927922" [c62c011b-74fa-440c-9ff9-56721cb1a58d] Running
	I0103 20:14:31.105153   61400 system_pods.go:61] "kube-controller-manager-old-k8s-version-927922" [3d85024c-8cc4-4a99-b8b7-2151c10918f7] Running
	I0103 20:14:31.105156   61400 system_pods.go:61] "kube-proxy-jk7jw" [ef720f69-1bfd-4e75-9943-ff7ee3145ecc] Running
	I0103 20:14:31.105160   61400 system_pods.go:61] "kube-scheduler-old-k8s-version-927922" [74ed1414-7a76-45bd-9c0e-e4c9670d4c1b] Running
	I0103 20:14:31.105164   61400 system_pods.go:61] "storage-provisioner" [4157ff41-1b3b-4eb7-b23b-2de69398161c] Running
	I0103 20:14:31.105168   61400 system_pods.go:74] duration metric: took 180.326535ms to wait for pod list to return data ...
	I0103 20:14:31.105176   61400 default_sa.go:34] waiting for default service account to be created ...
	I0103 20:14:31.303919   61400 default_sa.go:45] found service account: "default"
	I0103 20:14:31.303945   61400 default_sa.go:55] duration metric: took 198.763782ms for default service account to be created ...
	I0103 20:14:31.303952   61400 system_pods.go:116] waiting for k8s-apps to be running ...
	I0103 20:14:31.504913   61400 system_pods.go:86] 7 kube-system pods found
	I0103 20:14:31.504942   61400 system_pods.go:89] "coredns-5644d7b6d9-nvbsl" [22884cc1-f360-4ee8-bafc-340bb24faa41] Running
	I0103 20:14:31.504948   61400 system_pods.go:89] "etcd-old-k8s-version-927922" [f395d0d3-416a-4915-b587-6e51eb8648a2] Running
	I0103 20:14:31.504952   61400 system_pods.go:89] "kube-apiserver-old-k8s-version-927922" [c62c011b-74fa-440c-9ff9-56721cb1a58d] Running
	I0103 20:14:31.504960   61400 system_pods.go:89] "kube-controller-manager-old-k8s-version-927922" [3d85024c-8cc4-4a99-b8b7-2151c10918f7] Running
	I0103 20:14:31.504964   61400 system_pods.go:89] "kube-proxy-jk7jw" [ef720f69-1bfd-4e75-9943-ff7ee3145ecc] Running
	I0103 20:14:31.504967   61400 system_pods.go:89] "kube-scheduler-old-k8s-version-927922" [74ed1414-7a76-45bd-9c0e-e4c9670d4c1b] Running
	I0103 20:14:31.504971   61400 system_pods.go:89] "storage-provisioner" [4157ff41-1b3b-4eb7-b23b-2de69398161c] Running
	I0103 20:14:31.504978   61400 system_pods.go:126] duration metric: took 201.020363ms to wait for k8s-apps to be running ...
	I0103 20:14:31.504987   61400 system_svc.go:44] waiting for kubelet service to be running ....
	I0103 20:14:31.505042   61400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 20:14:31.519544   61400 system_svc.go:56] duration metric: took 14.547054ms WaitForService to wait for kubelet.
	I0103 20:14:31.519581   61400 kubeadm.go:581] duration metric: took 8.263723255s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0103 20:14:31.519604   61400 node_conditions.go:102] verifying NodePressure condition ...
	I0103 20:14:31.703367   61400 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0103 20:14:31.703393   61400 node_conditions.go:123] node cpu capacity is 2
	I0103 20:14:31.703402   61400 node_conditions.go:105] duration metric: took 183.794284ms to run NodePressure ...
	I0103 20:14:31.703413   61400 start.go:228] waiting for startup goroutines ...
	I0103 20:14:31.703419   61400 start.go:233] waiting for cluster config update ...
	I0103 20:14:31.703427   61400 start.go:242] writing updated cluster config ...
	I0103 20:14:31.703726   61400 ssh_runner.go:195] Run: rm -f paused
	I0103 20:14:31.752491   61400 start.go:600] kubectl: 1.29.0, cluster: 1.16.0 (minor skew: 13)
	I0103 20:14:31.754609   61400 out.go:177] 
	W0103 20:14:31.756132   61400 out.go:239] ! /usr/local/bin/kubectl is version 1.29.0, which may have incompatibilities with Kubernetes 1.16.0.
	I0103 20:14:31.757531   61400 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0103 20:14:31.758908   61400 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-927922" cluster and "default" namespace by default
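The warning a few lines up comes from comparing the local kubectl version (1.29.0) with the cluster version (1.16.0): the minor components differ by 13, well beyond the supported skew. A small self-contained sketch of that minor-skew computation (illustrative only, not minikube's code):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew returns the absolute difference between the minor components of two
// "major.minor.patch" version strings, e.g. "1.29.0" vs "1.16.0" -> 13.
func minorSkew(a, b string) (int, error) {
	pa, pb := strings.Split(a, "."), strings.Split(b, ".")
	if len(pa) < 2 || len(pb) < 2 {
		return 0, fmt.Errorf("unexpected version format")
	}
	ma, err := strconv.Atoi(pa[1])
	if err != nil {
		return 0, err
	}
	mb, err := strconv.Atoi(pb[1])
	if err != nil {
		return 0, err
	}
	if ma > mb {
		return ma - mb, nil
	}
	return mb - ma, nil
}

func main() {
	skew, _ := minorSkew("1.29.0", "1.16.0")
	fmt.Printf("minor skew: %d\n", skew) // prints 13, matching the log's "(minor skew: 13)"
}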
	I0103 20:14:29.937557   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:32.437025   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:30.253875   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:32.752584   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:30.898036   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:33.398935   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:34.936535   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:37.436533   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:34.753233   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:37.252419   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:39.253992   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:35.896170   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:37.897520   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:40.397608   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:39.438748   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:41.439514   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:41.254480   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:43.756719   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:42.397869   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:44.398305   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:43.935597   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:45.936279   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:47.939184   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:46.253445   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:48.254497   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:46.896653   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:49.395106   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:50.436008   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:52.436929   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:50.754391   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:53.253984   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:51.396664   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:53.895621   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:54.937380   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:57.435980   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:55.254262   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:57.254379   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:56.399473   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:58.895378   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:59.436517   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:01.436644   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:03.437289   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:59.754343   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:02.256605   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:00.896080   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:02.896456   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:05.396614   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:05.935218   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:07.936528   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:04.753320   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:06.753702   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:08.754470   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:07.909774   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:10.398298   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:10.435847   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:12.436285   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:10.755735   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:13.260340   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:12.898368   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:15.395141   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:14.437252   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:16.437752   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:15.753850   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:18.252984   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:17.396224   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:19.396412   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:18.935744   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:20.936627   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:22.937157   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:20.753996   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:23.252893   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:21.396466   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:23.396556   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:25.435441   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:27.437177   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:25.253294   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:27.257573   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:25.895526   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:27.897999   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:30.396749   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:29.935811   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:31.936769   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:29.754895   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:32.252296   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:34.252439   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:32.398706   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:34.895914   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:34.435649   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:36.435937   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:36.253151   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:38.753045   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:36.897764   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:39.395522   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:38.935209   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:40.935922   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:42.936185   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:40.753242   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:43.254160   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:41.395722   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:43.895476   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:44.938043   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:47.436185   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:45.753607   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:47.757575   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:45.895628   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:47.898831   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:50.395366   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:49.437057   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:51.936658   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:50.254313   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:52.754096   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:52.396047   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:54.896005   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:53.937359   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:55.939092   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:58.435858   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:55.253159   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:57.752873   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:56.897368   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:59.397094   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:00.937099   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:02.937220   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:59.753924   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:01.754227   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:04.253189   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:01.895645   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:03.895950   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:05.435964   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:07.437247   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:06.753405   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:09.252564   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:06.395775   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:08.397119   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:09.937945   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:12.436531   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:11.254482   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:13.753409   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:10.898350   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:13.397549   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:14.936753   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:17.438482   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:15.753689   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:18.253420   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:15.895365   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:17.897998   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:19.898464   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:19.935559   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:21.935664   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:20.253748   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:22.253878   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:24.254457   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:22.395466   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:24.400100   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:23.935958   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:25.936631   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:28.436748   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:26.752881   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:29.253740   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:26.897228   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:29.396925   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:30.436921   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:32.939573   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:31.254681   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:33.759891   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:31.895948   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:33.899819   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:35.436828   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:37.437536   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:36.252972   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:38.254083   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:36.396572   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:38.895816   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:39.440085   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:41.939589   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:40.752960   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:42.753342   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:40.897788   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:43.396277   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:44.437295   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:46.934854   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:44.753613   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:47.253118   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:45.896539   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:47.897012   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:50.399452   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:48.936795   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:51.435353   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:53.436742   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:49.753890   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:52.252908   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:54.253390   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:52.895504   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:54.896960   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:55.937358   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:58.435997   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:56.256446   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:58.754312   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:56.898710   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:58.899652   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:00.437252   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:02.936336   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:01.254343   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:03.754483   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:01.398833   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:03.896269   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:05.437531   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:07.935848   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:05.755471   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:07.756171   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:05.897369   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:08.397436   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:09.936237   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:11.940482   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:10.253599   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:12.254176   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:14.254316   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:10.898370   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:13.400421   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:14.436922   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:16.936283   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:16.753503   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:19.253120   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:15.896003   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:18.396552   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:19.438479   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:21.936957   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:21.253522   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:23.752947   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:20.895961   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:23.395452   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:24.435005   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:26.437797   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:26.437828   61676 pod_ready.go:81] duration metric: took 4m0.009294112s waiting for pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace to be "Ready" ...
	E0103 20:17:26.437841   61676 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0103 20:17:26.437850   61676 pod_ready.go:38] duration metric: took 4m1.606787831s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
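
The Ready wait that ends here is the step that times out: pod_ready.go polls the metrics-server pod's Ready condition every couple of seconds until a 4-minute deadline expires. Below is a minimal sketch of that kind of polling loop, assuming kubectl is on PATH and the embed-certs-451331 context from this run is reachable; the pod name is copied from the log, and the code is illustrative rather than minikube's actual implementation.

// podready_sketch.go - rough stand-in for the readiness poll logged above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	const (
		ctx     = "embed-certs-451331"
		ns      = "kube-system"
		pod     = "metrics-server-57f55c9bc5-sm8rb" // name taken from the log
		timeout = 4 * time.Minute                   // matches the 4m0s deadline above
	)
	jsonpath := `{.status.conditions[?(@.type=="Ready")].status}`
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", ctx, "-n", ns,
			"get", "pod", pod, "-o", "jsonpath="+jsonpath).Output()
		if err == nil && strings.TrimSpace(string(out)) == "True" {
			fmt.Println("pod is Ready")
			return
		}
		fmt.Printf("pod %q has status \"Ready\":%q\n", pod, strings.TrimSpace(string(out)))
		time.Sleep(2 * time.Second) // the log shows roughly 2-2.5s between checks
	}
	fmt.Println("WaitExtra: waitPodCondition: context deadline exceeded")
}
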
	I0103 20:17:26.437868   61676 api_server.go:52] waiting for apiserver process to appear ...
	I0103 20:17:26.437901   61676 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0103 20:17:26.437951   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0103 20:17:26.499917   61676 cri.go:89] found id: "b43e6c342d85d7f5a7233dc5c79962eece071e18b4bbcd6583c66d34a8bba0d6"
	I0103 20:17:26.499942   61676 cri.go:89] found id: ""
	I0103 20:17:26.499958   61676 logs.go:284] 1 containers: [b43e6c342d85d7f5a7233dc5c79962eece071e18b4bbcd6583c66d34a8bba0d6]
	I0103 20:17:26.500014   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:26.504239   61676 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0103 20:17:26.504290   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0103 20:17:26.539965   61676 cri.go:89] found id: "d5b2310ec90e1b2cf6666d6b054b6e3233664a226da6f2c5dbb9f1aad6bf5e40"
	I0103 20:17:26.539992   61676 cri.go:89] found id: ""
	I0103 20:17:26.540001   61676 logs.go:284] 1 containers: [d5b2310ec90e1b2cf6666d6b054b6e3233664a226da6f2c5dbb9f1aad6bf5e40]
	I0103 20:17:26.540052   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:26.544591   61676 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0103 20:17:26.544667   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0103 20:17:26.583231   61676 cri.go:89] found id: "e982a226a7c2e7dda117aabd99279729198043d8b9c79fdc0904c8384f9a031b"
	I0103 20:17:26.583256   61676 cri.go:89] found id: ""
	I0103 20:17:26.583265   61676 logs.go:284] 1 containers: [e982a226a7c2e7dda117aabd99279729198043d8b9c79fdc0904c8384f9a031b]
	I0103 20:17:26.583328   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:26.587642   61676 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0103 20:17:26.587705   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0103 20:17:26.625230   61676 cri.go:89] found id: "91cc8e54c59c4642dc1a808adf81ed78af36cc6dae57d37fd913d2d01d232f3d"
	I0103 20:17:26.625258   61676 cri.go:89] found id: ""
	I0103 20:17:26.625267   61676 logs.go:284] 1 containers: [91cc8e54c59c4642dc1a808adf81ed78af36cc6dae57d37fd913d2d01d232f3d]
	I0103 20:17:26.625329   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:26.629448   61676 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0103 20:17:26.629527   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0103 20:17:26.666698   61676 cri.go:89] found id: "a076ccb3aaf528efca2f2b4a38d06dc3b8376212edc210731b7c268a24909ccf"
	I0103 20:17:26.666726   61676 cri.go:89] found id: ""
	I0103 20:17:26.666736   61676 logs.go:284] 1 containers: [a076ccb3aaf528efca2f2b4a38d06dc3b8376212edc210731b7c268a24909ccf]
	I0103 20:17:26.666796   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:26.671434   61676 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0103 20:17:26.671500   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0103 20:17:26.703900   61676 cri.go:89] found id: "8049f81441fd2e7bb236ca49b119c49cefb7a5c69b6880f606b3cc2789145523"
	I0103 20:17:26.703921   61676 cri.go:89] found id: ""
	I0103 20:17:26.703929   61676 logs.go:284] 1 containers: [8049f81441fd2e7bb236ca49b119c49cefb7a5c69b6880f606b3cc2789145523]
	I0103 20:17:26.703986   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:26.707915   61676 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0103 20:17:26.707979   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0103 20:17:26.747144   61676 cri.go:89] found id: ""
	I0103 20:17:26.747168   61676 logs.go:284] 0 containers: []
	W0103 20:17:26.747182   61676 logs.go:286] No container was found matching "kindnet"
	I0103 20:17:26.747189   61676 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0103 20:17:26.747246   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0103 20:17:26.786418   61676 cri.go:89] found id: "0ed16e65a5dbabdbadbf6d56a4fd7bafe80d50867ed1829e86bfa8f28e8fb719"
	I0103 20:17:26.786441   61676 cri.go:89] found id: "3c57ed4c58edf02d0e73cd6dfc6799993556d74916ffe4c07b91d85346f291e2"
	I0103 20:17:26.786448   61676 cri.go:89] found id: ""
	I0103 20:17:26.786456   61676 logs.go:284] 2 containers: [0ed16e65a5dbabdbadbf6d56a4fd7bafe80d50867ed1829e86bfa8f28e8fb719 3c57ed4c58edf02d0e73cd6dfc6799993556d74916ffe4c07b91d85346f291e2]
	I0103 20:17:26.786515   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:26.790506   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:26.794304   61676 logs.go:123] Gathering logs for kubelet ...
	I0103 20:17:26.794330   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 20:17:26.851272   61676 logs.go:123] Gathering logs for kube-apiserver [b43e6c342d85d7f5a7233dc5c79962eece071e18b4bbcd6583c66d34a8bba0d6] ...
	I0103 20:17:26.851317   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b43e6c342d85d7f5a7233dc5c79962eece071e18b4bbcd6583c66d34a8bba0d6"
	I0103 20:17:26.894480   61676 logs.go:123] Gathering logs for etcd [d5b2310ec90e1b2cf6666d6b054b6e3233664a226da6f2c5dbb9f1aad6bf5e40] ...
	I0103 20:17:26.894508   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5b2310ec90e1b2cf6666d6b054b6e3233664a226da6f2c5dbb9f1aad6bf5e40"
	I0103 20:17:26.941799   61676 logs.go:123] Gathering logs for kube-scheduler [91cc8e54c59c4642dc1a808adf81ed78af36cc6dae57d37fd913d2d01d232f3d] ...
	I0103 20:17:26.941826   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 91cc8e54c59c4642dc1a808adf81ed78af36cc6dae57d37fd913d2d01d232f3d"
	I0103 20:17:26.981759   61676 logs.go:123] Gathering logs for kube-proxy [a076ccb3aaf528efca2f2b4a38d06dc3b8376212edc210731b7c268a24909ccf] ...
	I0103 20:17:26.981793   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a076ccb3aaf528efca2f2b4a38d06dc3b8376212edc210731b7c268a24909ccf"
	I0103 20:17:27.021318   61676 logs.go:123] Gathering logs for storage-provisioner [0ed16e65a5dbabdbadbf6d56a4fd7bafe80d50867ed1829e86bfa8f28e8fb719] ...
	I0103 20:17:27.021347   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ed16e65a5dbabdbadbf6d56a4fd7bafe80d50867ed1829e86bfa8f28e8fb719"
	I0103 20:17:27.061320   61676 logs.go:123] Gathering logs for storage-provisioner [3c57ed4c58edf02d0e73cd6dfc6799993556d74916ffe4c07b91d85346f291e2] ...
	I0103 20:17:27.061351   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c57ed4c58edf02d0e73cd6dfc6799993556d74916ffe4c07b91d85346f291e2"
	I0103 20:17:27.110137   61676 logs.go:123] Gathering logs for dmesg ...
	I0103 20:17:27.110169   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 20:17:27.123548   61676 logs.go:123] Gathering logs for coredns [e982a226a7c2e7dda117aabd99279729198043d8b9c79fdc0904c8384f9a031b] ...
	I0103 20:17:27.123582   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e982a226a7c2e7dda117aabd99279729198043d8b9c79fdc0904c8384f9a031b"
	I0103 20:17:27.162644   61676 logs.go:123] Gathering logs for kube-controller-manager [8049f81441fd2e7bb236ca49b119c49cefb7a5c69b6880f606b3cc2789145523] ...
	I0103 20:17:27.162678   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8049f81441fd2e7bb236ca49b119c49cefb7a5c69b6880f606b3cc2789145523"
	I0103 20:17:27.211599   61676 logs.go:123] Gathering logs for describe nodes ...
	I0103 20:17:27.211636   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0103 20:17:27.361299   61676 logs.go:123] Gathering logs for CRI-O ...
	I0103 20:17:27.361329   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0103 20:17:27.866123   61676 logs.go:123] Gathering logs for container status ...
	I0103 20:17:27.866166   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
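
The log-gathering cycle that ends here follows a fixed two-step pattern: resolve container IDs with `crictl ps -a --quiet --name=<component>`, then dump each container's recent output with `crictl logs --tail 400 <id>` (with a docker fallback only for the overall container-status listing). A self-contained sketch of that pattern follows; it assumes crictl is installed on the node and the program runs with root privileges, whereas minikube drives the same commands over SSH.

// crilogs_sketch.go - two-step CRI log collection as seen in the cycle above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "storage-provisioner"}
	for _, name := range components {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("listing %s containers: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out)) // one container ID per line
		fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
		for _, id := range ids {
			// mirrors: sudo /usr/bin/crictl logs --tail 400 <id>
			logs, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("=== %s [%s] ===\n%s\n", name, id, logs)
		}
	}
}
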
	I0103 20:17:25.753957   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:27.754559   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:25.896204   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:28.395594   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:30.418870   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:17:30.433778   61676 api_server.go:72] duration metric: took 4m12.637164197s to wait for apiserver process to appear ...
	I0103 20:17:30.433801   61676 api_server.go:88] waiting for apiserver healthz status ...
	I0103 20:17:30.433838   61676 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0103 20:17:30.433911   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0103 20:17:30.472309   61676 cri.go:89] found id: "b43e6c342d85d7f5a7233dc5c79962eece071e18b4bbcd6583c66d34a8bba0d6"
	I0103 20:17:30.472337   61676 cri.go:89] found id: ""
	I0103 20:17:30.472348   61676 logs.go:284] 1 containers: [b43e6c342d85d7f5a7233dc5c79962eece071e18b4bbcd6583c66d34a8bba0d6]
	I0103 20:17:30.472407   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:30.476787   61676 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0103 20:17:30.476858   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0103 20:17:30.522290   61676 cri.go:89] found id: "d5b2310ec90e1b2cf6666d6b054b6e3233664a226da6f2c5dbb9f1aad6bf5e40"
	I0103 20:17:30.522322   61676 cri.go:89] found id: ""
	I0103 20:17:30.522334   61676 logs.go:284] 1 containers: [d5b2310ec90e1b2cf6666d6b054b6e3233664a226da6f2c5dbb9f1aad6bf5e40]
	I0103 20:17:30.522390   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:30.526502   61676 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0103 20:17:30.526581   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0103 20:17:30.568301   61676 cri.go:89] found id: "e982a226a7c2e7dda117aabd99279729198043d8b9c79fdc0904c8384f9a031b"
	I0103 20:17:30.568328   61676 cri.go:89] found id: ""
	I0103 20:17:30.568335   61676 logs.go:284] 1 containers: [e982a226a7c2e7dda117aabd99279729198043d8b9c79fdc0904c8384f9a031b]
	I0103 20:17:30.568382   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:30.572398   61676 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0103 20:17:30.572454   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0103 20:17:30.611671   61676 cri.go:89] found id: "91cc8e54c59c4642dc1a808adf81ed78af36cc6dae57d37fd913d2d01d232f3d"
	I0103 20:17:30.611694   61676 cri.go:89] found id: ""
	I0103 20:17:30.611702   61676 logs.go:284] 1 containers: [91cc8e54c59c4642dc1a808adf81ed78af36cc6dae57d37fd913d2d01d232f3d]
	I0103 20:17:30.611749   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:30.615971   61676 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0103 20:17:30.616035   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0103 20:17:30.658804   61676 cri.go:89] found id: "a076ccb3aaf528efca2f2b4a38d06dc3b8376212edc210731b7c268a24909ccf"
	I0103 20:17:30.658830   61676 cri.go:89] found id: ""
	I0103 20:17:30.658839   61676 logs.go:284] 1 containers: [a076ccb3aaf528efca2f2b4a38d06dc3b8376212edc210731b7c268a24909ccf]
	I0103 20:17:30.658889   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:30.662859   61676 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0103 20:17:30.662930   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0103 20:17:30.705941   61676 cri.go:89] found id: "8049f81441fd2e7bb236ca49b119c49cefb7a5c69b6880f606b3cc2789145523"
	I0103 20:17:30.705968   61676 cri.go:89] found id: ""
	I0103 20:17:30.705976   61676 logs.go:284] 1 containers: [8049f81441fd2e7bb236ca49b119c49cefb7a5c69b6880f606b3cc2789145523]
	I0103 20:17:30.706031   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:30.710228   61676 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0103 20:17:30.710308   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0103 20:17:30.749052   61676 cri.go:89] found id: ""
	I0103 20:17:30.749077   61676 logs.go:284] 0 containers: []
	W0103 20:17:30.749088   61676 logs.go:286] No container was found matching "kindnet"
	I0103 20:17:30.749096   61676 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0103 20:17:30.749157   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0103 20:17:30.786239   61676 cri.go:89] found id: "0ed16e65a5dbabdbadbf6d56a4fd7bafe80d50867ed1829e86bfa8f28e8fb719"
	I0103 20:17:30.786267   61676 cri.go:89] found id: "3c57ed4c58edf02d0e73cd6dfc6799993556d74916ffe4c07b91d85346f291e2"
	I0103 20:17:30.786273   61676 cri.go:89] found id: ""
	I0103 20:17:30.786280   61676 logs.go:284] 2 containers: [0ed16e65a5dbabdbadbf6d56a4fd7bafe80d50867ed1829e86bfa8f28e8fb719 3c57ed4c58edf02d0e73cd6dfc6799993556d74916ffe4c07b91d85346f291e2]
	I0103 20:17:30.786341   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:30.790680   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:30.794294   61676 logs.go:123] Gathering logs for coredns [e982a226a7c2e7dda117aabd99279729198043d8b9c79fdc0904c8384f9a031b] ...
	I0103 20:17:30.794320   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e982a226a7c2e7dda117aabd99279729198043d8b9c79fdc0904c8384f9a031b"
	I0103 20:17:30.835916   61676 logs.go:123] Gathering logs for storage-provisioner [0ed16e65a5dbabdbadbf6d56a4fd7bafe80d50867ed1829e86bfa8f28e8fb719] ...
	I0103 20:17:30.835952   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ed16e65a5dbabdbadbf6d56a4fd7bafe80d50867ed1829e86bfa8f28e8fb719"
	I0103 20:17:30.876225   61676 logs.go:123] Gathering logs for storage-provisioner [3c57ed4c58edf02d0e73cd6dfc6799993556d74916ffe4c07b91d85346f291e2] ...
	I0103 20:17:30.876255   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c57ed4c58edf02d0e73cd6dfc6799993556d74916ffe4c07b91d85346f291e2"
	I0103 20:17:30.917657   61676 logs.go:123] Gathering logs for dmesg ...
	I0103 20:17:30.917684   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 20:17:30.930805   61676 logs.go:123] Gathering logs for describe nodes ...
	I0103 20:17:30.930831   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0103 20:17:31.060049   61676 logs.go:123] Gathering logs for kube-apiserver [b43e6c342d85d7f5a7233dc5c79962eece071e18b4bbcd6583c66d34a8bba0d6] ...
	I0103 20:17:31.060086   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b43e6c342d85d7f5a7233dc5c79962eece071e18b4bbcd6583c66d34a8bba0d6"
	I0103 20:17:31.119725   61676 logs.go:123] Gathering logs for kube-scheduler [91cc8e54c59c4642dc1a808adf81ed78af36cc6dae57d37fd913d2d01d232f3d] ...
	I0103 20:17:31.119754   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 91cc8e54c59c4642dc1a808adf81ed78af36cc6dae57d37fd913d2d01d232f3d"
	I0103 20:17:31.164226   61676 logs.go:123] Gathering logs for kube-proxy [a076ccb3aaf528efca2f2b4a38d06dc3b8376212edc210731b7c268a24909ccf] ...
	I0103 20:17:31.164261   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a076ccb3aaf528efca2f2b4a38d06dc3b8376212edc210731b7c268a24909ccf"
	I0103 20:17:31.204790   61676 logs.go:123] Gathering logs for kube-controller-manager [8049f81441fd2e7bb236ca49b119c49cefb7a5c69b6880f606b3cc2789145523] ...
	I0103 20:17:31.204816   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8049f81441fd2e7bb236ca49b119c49cefb7a5c69b6880f606b3cc2789145523"
	I0103 20:17:31.264949   61676 logs.go:123] Gathering logs for CRI-O ...
	I0103 20:17:31.264984   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0103 20:17:31.658178   61676 logs.go:123] Gathering logs for kubelet ...
	I0103 20:17:31.658217   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 20:17:31.712090   61676 logs.go:123] Gathering logs for etcd [d5b2310ec90e1b2cf6666d6b054b6e3233664a226da6f2c5dbb9f1aad6bf5e40] ...
	I0103 20:17:31.712125   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5b2310ec90e1b2cf6666d6b054b6e3233664a226da6f2c5dbb9f1aad6bf5e40"
	I0103 20:17:31.757333   61676 logs.go:123] Gathering logs for container status ...
	I0103 20:17:31.757364   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 20:17:30.253170   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:32.753056   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:30.896380   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:32.896512   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:35.399775   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:34.304692   61676 api_server.go:253] Checking apiserver healthz at https://192.168.50.197:8443/healthz ...
	I0103 20:17:34.311338   61676 api_server.go:279] https://192.168.50.197:8443/healthz returned 200:
	ok
	I0103 20:17:34.312603   61676 api_server.go:141] control plane version: v1.28.4
	I0103 20:17:34.312624   61676 api_server.go:131] duration metric: took 3.878815002s to wait for apiserver health ...
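
The healthz probe logged just above is a plain HTTPS GET against the apiserver. The bare-bones sketch below copies the endpoint URL from the log; InsecureSkipVerify stands in for the cluster CA and client certificates the real check uses, so it is an illustrative shortcut, not the test harness's code.

// healthz_sketch.go - simplified version of the api_server.go healthz check.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.50.197:8443/healthz")
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("https://192.168.50.197:8443/healthz returned %d:\n%s\n", resp.StatusCode, body)
}
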
	I0103 20:17:34.312632   61676 system_pods.go:43] waiting for kube-system pods to appear ...
	I0103 20:17:34.312651   61676 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0103 20:17:34.312705   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0103 20:17:34.347683   61676 cri.go:89] found id: "b43e6c342d85d7f5a7233dc5c79962eece071e18b4bbcd6583c66d34a8bba0d6"
	I0103 20:17:34.347701   61676 cri.go:89] found id: ""
	I0103 20:17:34.347711   61676 logs.go:284] 1 containers: [b43e6c342d85d7f5a7233dc5c79962eece071e18b4bbcd6583c66d34a8bba0d6]
	I0103 20:17:34.347769   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:34.351695   61676 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0103 20:17:34.351773   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0103 20:17:34.386166   61676 cri.go:89] found id: "d5b2310ec90e1b2cf6666d6b054b6e3233664a226da6f2c5dbb9f1aad6bf5e40"
	I0103 20:17:34.386188   61676 cri.go:89] found id: ""
	I0103 20:17:34.386197   61676 logs.go:284] 1 containers: [d5b2310ec90e1b2cf6666d6b054b6e3233664a226da6f2c5dbb9f1aad6bf5e40]
	I0103 20:17:34.386259   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:34.390352   61676 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0103 20:17:34.390427   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0103 20:17:34.427772   61676 cri.go:89] found id: "e982a226a7c2e7dda117aabd99279729198043d8b9c79fdc0904c8384f9a031b"
	I0103 20:17:34.427801   61676 cri.go:89] found id: ""
	I0103 20:17:34.427811   61676 logs.go:284] 1 containers: [e982a226a7c2e7dda117aabd99279729198043d8b9c79fdc0904c8384f9a031b]
	I0103 20:17:34.427872   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:34.432258   61676 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0103 20:17:34.432324   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0103 20:17:34.471746   61676 cri.go:89] found id: "91cc8e54c59c4642dc1a808adf81ed78af36cc6dae57d37fd913d2d01d232f3d"
	I0103 20:17:34.471789   61676 cri.go:89] found id: ""
	I0103 20:17:34.471812   61676 logs.go:284] 1 containers: [91cc8e54c59c4642dc1a808adf81ed78af36cc6dae57d37fd913d2d01d232f3d]
	I0103 20:17:34.471878   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:34.476656   61676 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0103 20:17:34.476729   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0103 20:17:34.514594   61676 cri.go:89] found id: "a076ccb3aaf528efca2f2b4a38d06dc3b8376212edc210731b7c268a24909ccf"
	I0103 20:17:34.514626   61676 cri.go:89] found id: ""
	I0103 20:17:34.514685   61676 logs.go:284] 1 containers: [a076ccb3aaf528efca2f2b4a38d06dc3b8376212edc210731b7c268a24909ccf]
	I0103 20:17:34.514779   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:34.518789   61676 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0103 20:17:34.518849   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0103 20:17:34.555672   61676 cri.go:89] found id: "8049f81441fd2e7bb236ca49b119c49cefb7a5c69b6880f606b3cc2789145523"
	I0103 20:17:34.555698   61676 cri.go:89] found id: ""
	I0103 20:17:34.555707   61676 logs.go:284] 1 containers: [8049f81441fd2e7bb236ca49b119c49cefb7a5c69b6880f606b3cc2789145523]
	I0103 20:17:34.555771   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:34.560278   61676 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0103 20:17:34.560338   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0103 20:17:34.598718   61676 cri.go:89] found id: ""
	I0103 20:17:34.598742   61676 logs.go:284] 0 containers: []
	W0103 20:17:34.598753   61676 logs.go:286] No container was found matching "kindnet"
	I0103 20:17:34.598759   61676 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0103 20:17:34.598810   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0103 20:17:34.635723   61676 cri.go:89] found id: "0ed16e65a5dbabdbadbf6d56a4fd7bafe80d50867ed1829e86bfa8f28e8fb719"
	I0103 20:17:34.635751   61676 cri.go:89] found id: "3c57ed4c58edf02d0e73cd6dfc6799993556d74916ffe4c07b91d85346f291e2"
	I0103 20:17:34.635758   61676 cri.go:89] found id: ""
	I0103 20:17:34.635767   61676 logs.go:284] 2 containers: [0ed16e65a5dbabdbadbf6d56a4fd7bafe80d50867ed1829e86bfa8f28e8fb719 3c57ed4c58edf02d0e73cd6dfc6799993556d74916ffe4c07b91d85346f291e2]
	I0103 20:17:34.635814   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:34.640466   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:34.644461   61676 logs.go:123] Gathering logs for dmesg ...
	I0103 20:17:34.644490   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 20:17:34.659819   61676 logs.go:123] Gathering logs for coredns [e982a226a7c2e7dda117aabd99279729198043d8b9c79fdc0904c8384f9a031b] ...
	I0103 20:17:34.659856   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e982a226a7c2e7dda117aabd99279729198043d8b9c79fdc0904c8384f9a031b"
	I0103 20:17:34.697807   61676 logs.go:123] Gathering logs for kube-scheduler [91cc8e54c59c4642dc1a808adf81ed78af36cc6dae57d37fd913d2d01d232f3d] ...
	I0103 20:17:34.697840   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 91cc8e54c59c4642dc1a808adf81ed78af36cc6dae57d37fd913d2d01d232f3d"
	I0103 20:17:34.745366   61676 logs.go:123] Gathering logs for kube-controller-manager [8049f81441fd2e7bb236ca49b119c49cefb7a5c69b6880f606b3cc2789145523] ...
	I0103 20:17:34.745397   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8049f81441fd2e7bb236ca49b119c49cefb7a5c69b6880f606b3cc2789145523"
	I0103 20:17:34.804885   61676 logs.go:123] Gathering logs for kube-apiserver [b43e6c342d85d7f5a7233dc5c79962eece071e18b4bbcd6583c66d34a8bba0d6] ...
	I0103 20:17:34.804919   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b43e6c342d85d7f5a7233dc5c79962eece071e18b4bbcd6583c66d34a8bba0d6"
	I0103 20:17:34.848753   61676 logs.go:123] Gathering logs for etcd [d5b2310ec90e1b2cf6666d6b054b6e3233664a226da6f2c5dbb9f1aad6bf5e40] ...
	I0103 20:17:34.848784   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5b2310ec90e1b2cf6666d6b054b6e3233664a226da6f2c5dbb9f1aad6bf5e40"
	I0103 20:17:34.891492   61676 logs.go:123] Gathering logs for CRI-O ...
	I0103 20:17:34.891524   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0103 20:17:35.234093   61676 logs.go:123] Gathering logs for kube-proxy [a076ccb3aaf528efca2f2b4a38d06dc3b8376212edc210731b7c268a24909ccf] ...
	I0103 20:17:35.234133   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a076ccb3aaf528efca2f2b4a38d06dc3b8376212edc210731b7c268a24909ccf"
	I0103 20:17:35.281396   61676 logs.go:123] Gathering logs for storage-provisioner [0ed16e65a5dbabdbadbf6d56a4fd7bafe80d50867ed1829e86bfa8f28e8fb719] ...
	I0103 20:17:35.281425   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ed16e65a5dbabdbadbf6d56a4fd7bafe80d50867ed1829e86bfa8f28e8fb719"
	I0103 20:17:35.317595   61676 logs.go:123] Gathering logs for storage-provisioner [3c57ed4c58edf02d0e73cd6dfc6799993556d74916ffe4c07b91d85346f291e2] ...
	I0103 20:17:35.317622   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c57ed4c58edf02d0e73cd6dfc6799993556d74916ffe4c07b91d85346f291e2"
	I0103 20:17:35.357552   61676 logs.go:123] Gathering logs for container status ...
	I0103 20:17:35.357600   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 20:17:35.405369   61676 logs.go:123] Gathering logs for kubelet ...
	I0103 20:17:35.405394   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 20:17:35.459496   61676 logs.go:123] Gathering logs for describe nodes ...
	I0103 20:17:35.459535   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0103 20:17:38.101844   61676 system_pods.go:59] 8 kube-system pods found
	I0103 20:17:38.101870   61676 system_pods.go:61] "coredns-5dd5756b68-sx6gg" [6a4ea161-1a32-4c3b-9a0d-b4c596492d8b] Running
	I0103 20:17:38.101875   61676 system_pods.go:61] "etcd-embed-certs-451331" [01d6441d-5e39-405a-81df-c2ed1e28cf0b] Running
	I0103 20:17:38.101879   61676 system_pods.go:61] "kube-apiserver-embed-certs-451331" [ed38f120-6a1a-48e7-9346-f792f2e13cfc] Running
	I0103 20:17:38.101886   61676 system_pods.go:61] "kube-controller-manager-embed-certs-451331" [4ca17ea6-a7e6-425b-98ba-7f917ceb91a0] Running
	I0103 20:17:38.101892   61676 system_pods.go:61] "kube-proxy-fsnb9" [d1f00cf1-e9c4-442b-a6b3-b633252b840c] Running
	I0103 20:17:38.101898   61676 system_pods.go:61] "kube-scheduler-embed-certs-451331" [00ec8091-7ed7-40b0-8b63-1c548fa8632d] Running
	I0103 20:17:38.101907   61676 system_pods.go:61] "metrics-server-57f55c9bc5-sm8rb" [12b9f83d-abf8-431c-a271-b8489d32f0de] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0103 20:17:38.101919   61676 system_pods.go:61] "storage-provisioner" [cbce49e7-cef5-40a1-a017-906fcc77ef66] Running
	I0103 20:17:38.101931   61676 system_pods.go:74] duration metric: took 3.789293156s to wait for pod list to return data ...
	I0103 20:17:38.101940   61676 default_sa.go:34] waiting for default service account to be created ...
	I0103 20:17:38.104360   61676 default_sa.go:45] found service account: "default"
	I0103 20:17:38.104386   61676 default_sa.go:55] duration metric: took 2.437157ms for default service account to be created ...
	I0103 20:17:38.104395   61676 system_pods.go:116] waiting for k8s-apps to be running ...
	I0103 20:17:38.110198   61676 system_pods.go:86] 8 kube-system pods found
	I0103 20:17:38.110226   61676 system_pods.go:89] "coredns-5dd5756b68-sx6gg" [6a4ea161-1a32-4c3b-9a0d-b4c596492d8b] Running
	I0103 20:17:38.110233   61676 system_pods.go:89] "etcd-embed-certs-451331" [01d6441d-5e39-405a-81df-c2ed1e28cf0b] Running
	I0103 20:17:38.110241   61676 system_pods.go:89] "kube-apiserver-embed-certs-451331" [ed38f120-6a1a-48e7-9346-f792f2e13cfc] Running
	I0103 20:17:38.110247   61676 system_pods.go:89] "kube-controller-manager-embed-certs-451331" [4ca17ea6-a7e6-425b-98ba-7f917ceb91a0] Running
	I0103 20:17:38.110254   61676 system_pods.go:89] "kube-proxy-fsnb9" [d1f00cf1-e9c4-442b-a6b3-b633252b840c] Running
	I0103 20:17:38.110262   61676 system_pods.go:89] "kube-scheduler-embed-certs-451331" [00ec8091-7ed7-40b0-8b63-1c548fa8632d] Running
	I0103 20:17:38.110272   61676 system_pods.go:89] "metrics-server-57f55c9bc5-sm8rb" [12b9f83d-abf8-431c-a271-b8489d32f0de] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0103 20:17:38.110287   61676 system_pods.go:89] "storage-provisioner" [cbce49e7-cef5-40a1-a017-906fcc77ef66] Running
	I0103 20:17:38.110300   61676 system_pods.go:126] duration metric: took 5.897003ms to wait for k8s-apps to be running ...
	I0103 20:17:38.110310   61676 system_svc.go:44] waiting for kubelet service to be running ....
	I0103 20:17:38.110359   61676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 20:17:38.129025   61676 system_svc.go:56] duration metric: took 18.705736ms WaitForService to wait for kubelet.
	I0103 20:17:38.129071   61676 kubeadm.go:581] duration metric: took 4m20.332460734s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0103 20:17:38.129104   61676 node_conditions.go:102] verifying NodePressure condition ...
	I0103 20:17:38.132674   61676 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0103 20:17:38.132703   61676 node_conditions.go:123] node cpu capacity is 2
	I0103 20:17:38.132718   61676 node_conditions.go:105] duration metric: took 3.608193ms to run NodePressure ...
	I0103 20:17:38.132803   61676 start.go:228] waiting for startup goroutines ...
	I0103 20:17:38.132830   61676 start.go:233] waiting for cluster config update ...
	I0103 20:17:38.132846   61676 start.go:242] writing updated cluster config ...
	I0103 20:17:38.133198   61676 ssh_runner.go:195] Run: rm -f paused
	I0103 20:17:38.185728   61676 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0103 20:17:38.187862   61676 out.go:177] * Done! kubectl is now configured to use "embed-certs-451331" cluster and "default" namespace by default
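
For reference, the "waiting for k8s-apps to be running" step that precedes this Done! line can be approximated with a single kubectl query. The sketch assumes the embed-certs-451331 context from this run; the field selector on status.phase is standard kubectl behaviour, the rest is illustrative.

// systempods_sketch.go - rough equivalent of the system_pods.go running check.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "embed-certs-451331",
		"-n", "kube-system", "get", "pods",
		"--field-selector=status.phase!=Running", "--no-headers").CombinedOutput()
	if err != nil {
		fmt.Println("kubectl failed:", err, string(out))
		return
	}
	notRunning := strings.TrimSpace(string(out))
	if notRunning == "" || strings.Contains(notRunning, "No resources found") {
		fmt.Println("all kube-system pods are Running")
		return
	}
	// In this run metrics-server-57f55c9bc5-sm8rb stays Pending / not Ready,
	// which is the pod the Ready wait above timed out on.
	fmt.Println("pods not Running:\n" + notRunning)
}
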
	I0103 20:17:34.753175   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:37.254091   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:37.896317   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:40.396299   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:39.752580   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:41.755418   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:44.253073   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:42.897389   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:45.396646   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:46.253958   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:48.753284   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:47.398164   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:49.895246   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:50.754133   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:53.253046   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:51.895627   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:53.897877   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:55.254029   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:57.752707   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:56.398655   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:58.897483   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:59.753306   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:18:01.753500   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:18:02.255901   62015 pod_ready.go:81] duration metric: took 4m0.010124972s waiting for pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace to be "Ready" ...
	E0103 20:18:02.255929   62015 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0103 20:18:02.255939   62015 pod_ready.go:38] duration metric: took 4m4.070078749s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:18:02.255957   62015 api_server.go:52] waiting for apiserver process to appear ...
	I0103 20:18:02.255989   62015 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0103 20:18:02.256064   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0103 20:18:02.312578   62015 cri.go:89] found id: "fb19a705262540d0025443f8a9db8a3139cffa87d8fd8f412b12547818a91a8b"
	I0103 20:18:02.312606   62015 cri.go:89] found id: ""
	I0103 20:18:02.312616   62015 logs.go:284] 1 containers: [fb19a705262540d0025443f8a9db8a3139cffa87d8fd8f412b12547818a91a8b]
	I0103 20:18:02.312679   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:02.317969   62015 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0103 20:18:02.318064   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0103 20:18:02.361423   62015 cri.go:89] found id: "f7d2f606bd4457e00acad3e38e40d2a0eeba1830b665bd0b453061447cb63748"
	I0103 20:18:02.361451   62015 cri.go:89] found id: ""
	I0103 20:18:02.361464   62015 logs.go:284] 1 containers: [f7d2f606bd4457e00acad3e38e40d2a0eeba1830b665bd0b453061447cb63748]
	I0103 20:18:02.361527   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:02.365691   62015 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0103 20:18:02.365772   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0103 20:18:02.415087   62015 cri.go:89] found id: "b13d0a23b2b29e43671041dd6b0519d5cdc0ff78c47236c54585056c177f7f4a"
	I0103 20:18:02.415118   62015 cri.go:89] found id: ""
	I0103 20:18:02.415128   62015 logs.go:284] 1 containers: [b13d0a23b2b29e43671041dd6b0519d5cdc0ff78c47236c54585056c177f7f4a]
	I0103 20:18:02.415188   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:02.419409   62015 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0103 20:18:02.419493   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0103 20:18:02.459715   62015 cri.go:89] found id: "03433af76d74a120ca843ab0d9d45d2a0808d9048bf656bff68d7b7371082893"
	I0103 20:18:02.459744   62015 cri.go:89] found id: ""
	I0103 20:18:02.459754   62015 logs.go:284] 1 containers: [03433af76d74a120ca843ab0d9d45d2a0808d9048bf656bff68d7b7371082893]
	I0103 20:18:02.459816   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:02.464105   62015 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0103 20:18:02.464186   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0103 20:18:02.515523   62015 cri.go:89] found id: "250be399ab1a0c358e410293a6952aadd55d48a462f2edb9c6d0b560eb323cd8"
	I0103 20:18:02.515547   62015 cri.go:89] found id: ""
	I0103 20:18:02.515556   62015 logs.go:284] 1 containers: [250be399ab1a0c358e410293a6952aadd55d48a462f2edb9c6d0b560eb323cd8]
	I0103 20:18:02.515619   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:02.519586   62015 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0103 20:18:02.519646   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0103 20:18:02.561187   62015 cri.go:89] found id: "67f470e7e603d5253da506a4ad4eacce339e32b382bb6ad5e981e6f0c40abb85"
	I0103 20:18:02.561210   62015 cri.go:89] found id: ""
	I0103 20:18:02.561219   62015 logs.go:284] 1 containers: [67f470e7e603d5253da506a4ad4eacce339e32b382bb6ad5e981e6f0c40abb85]
	I0103 20:18:02.561288   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:02.566206   62015 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0103 20:18:02.566289   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0103 20:18:02.610993   62015 cri.go:89] found id: ""
	I0103 20:18:02.611019   62015 logs.go:284] 0 containers: []
	W0103 20:18:02.611029   62015 logs.go:286] No container was found matching "kindnet"
	I0103 20:18:02.611036   62015 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0103 20:18:02.611111   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0103 20:18:02.651736   62015 cri.go:89] found id: "08f95eed823c13190efb38f0b605b92442b8229f1fd1e3b9ab3f2d7fdf18c052"
	I0103 20:18:02.651764   62015 cri.go:89] found id: "367b9549fe5f7c717d71d33d0fbf5559d0b671b4eec29201566aa8354781474d"
	I0103 20:18:02.651771   62015 cri.go:89] found id: ""
	I0103 20:18:02.651779   62015 logs.go:284] 2 containers: [08f95eed823c13190efb38f0b605b92442b8229f1fd1e3b9ab3f2d7fdf18c052 367b9549fe5f7c717d71d33d0fbf5559d0b671b4eec29201566aa8354781474d]
	I0103 20:18:02.651839   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:02.656284   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:02.660614   62015 logs.go:123] Gathering logs for etcd [f7d2f606bd4457e00acad3e38e40d2a0eeba1830b665bd0b453061447cb63748] ...
	I0103 20:18:02.660636   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7d2f606bd4457e00acad3e38e40d2a0eeba1830b665bd0b453061447cb63748"
	I0103 20:18:02.707759   62015 logs.go:123] Gathering logs for kube-controller-manager [67f470e7e603d5253da506a4ad4eacce339e32b382bb6ad5e981e6f0c40abb85] ...
	I0103 20:18:02.707804   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 67f470e7e603d5253da506a4ad4eacce339e32b382bb6ad5e981e6f0c40abb85"
	I0103 20:18:02.766498   62015 logs.go:123] Gathering logs for CRI-O ...
	I0103 20:18:02.766551   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0103 20:18:03.227838   62015 logs.go:123] Gathering logs for kube-proxy [250be399ab1a0c358e410293a6952aadd55d48a462f2edb9c6d0b560eb323cd8] ...
	I0103 20:18:03.227884   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 250be399ab1a0c358e410293a6952aadd55d48a462f2edb9c6d0b560eb323cd8"
	I0103 20:18:03.269131   62015 logs.go:123] Gathering logs for storage-provisioner [08f95eed823c13190efb38f0b605b92442b8229f1fd1e3b9ab3f2d7fdf18c052] ...
	I0103 20:18:03.269174   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08f95eed823c13190efb38f0b605b92442b8229f1fd1e3b9ab3f2d7fdf18c052"
	I0103 20:18:03.307383   62015 logs.go:123] Gathering logs for kubelet ...
	I0103 20:18:03.307410   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 20:18:03.362005   62015 logs.go:123] Gathering logs for kube-apiserver [fb19a705262540d0025443f8a9db8a3139cffa87d8fd8f412b12547818a91a8b] ...
	I0103 20:18:03.362043   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb19a705262540d0025443f8a9db8a3139cffa87d8fd8f412b12547818a91a8b"
	I0103 20:18:03.412300   62015 logs.go:123] Gathering logs for coredns [b13d0a23b2b29e43671041dd6b0519d5cdc0ff78c47236c54585056c177f7f4a] ...
	I0103 20:18:03.412333   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b13d0a23b2b29e43671041dd6b0519d5cdc0ff78c47236c54585056c177f7f4a"
	I0103 20:18:03.448896   62015 logs.go:123] Gathering logs for describe nodes ...
	I0103 20:18:03.448922   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0103 20:18:03.587950   62015 logs.go:123] Gathering logs for kube-scheduler [03433af76d74a120ca843ab0d9d45d2a0808d9048bf656bff68d7b7371082893] ...
	I0103 20:18:03.587982   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03433af76d74a120ca843ab0d9d45d2a0808d9048bf656bff68d7b7371082893"
	I0103 20:18:03.629411   62015 logs.go:123] Gathering logs for container status ...
	I0103 20:18:03.629438   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 20:18:03.672468   62015 logs.go:123] Gathering logs for dmesg ...
	I0103 20:18:03.672499   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 20:18:03.685645   62015 logs.go:123] Gathering logs for storage-provisioner [367b9549fe5f7c717d71d33d0fbf5559d0b671b4eec29201566aa8354781474d] ...
	I0103 20:18:03.685682   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 367b9549fe5f7c717d71d33d0fbf5559d0b671b4eec29201566aa8354781474d"
	I0103 20:18:01.395689   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:18:03.396256   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:18:06.229417   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:18:06.244272   62015 api_server.go:72] duration metric: took 4m15.901019711s to wait for apiserver process to appear ...
	I0103 20:18:06.244306   62015 api_server.go:88] waiting for apiserver healthz status ...
	I0103 20:18:06.244351   62015 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0103 20:18:06.244412   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0103 20:18:06.292204   62015 cri.go:89] found id: "fb19a705262540d0025443f8a9db8a3139cffa87d8fd8f412b12547818a91a8b"
	I0103 20:18:06.292235   62015 cri.go:89] found id: ""
	I0103 20:18:06.292246   62015 logs.go:284] 1 containers: [fb19a705262540d0025443f8a9db8a3139cffa87d8fd8f412b12547818a91a8b]
	I0103 20:18:06.292309   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:06.296724   62015 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0103 20:18:06.296791   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0103 20:18:06.333984   62015 cri.go:89] found id: "f7d2f606bd4457e00acad3e38e40d2a0eeba1830b665bd0b453061447cb63748"
	I0103 20:18:06.334012   62015 cri.go:89] found id: ""
	I0103 20:18:06.334023   62015 logs.go:284] 1 containers: [f7d2f606bd4457e00acad3e38e40d2a0eeba1830b665bd0b453061447cb63748]
	I0103 20:18:06.334079   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:06.338045   62015 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0103 20:18:06.338123   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0103 20:18:06.374586   62015 cri.go:89] found id: "b13d0a23b2b29e43671041dd6b0519d5cdc0ff78c47236c54585056c177f7f4a"
	I0103 20:18:06.374610   62015 cri.go:89] found id: ""
	I0103 20:18:06.374617   62015 logs.go:284] 1 containers: [b13d0a23b2b29e43671041dd6b0519d5cdc0ff78c47236c54585056c177f7f4a]
	I0103 20:18:06.374669   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:06.378720   62015 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0103 20:18:06.378792   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0103 20:18:06.416220   62015 cri.go:89] found id: "03433af76d74a120ca843ab0d9d45d2a0808d9048bf656bff68d7b7371082893"
	I0103 20:18:06.416240   62015 cri.go:89] found id: ""
	I0103 20:18:06.416247   62015 logs.go:284] 1 containers: [03433af76d74a120ca843ab0d9d45d2a0808d9048bf656bff68d7b7371082893]
	I0103 20:18:06.416300   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:06.420190   62015 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0103 20:18:06.420247   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0103 20:18:06.458725   62015 cri.go:89] found id: "250be399ab1a0c358e410293a6952aadd55d48a462f2edb9c6d0b560eb323cd8"
	I0103 20:18:06.458745   62015 cri.go:89] found id: ""
	I0103 20:18:06.458754   62015 logs.go:284] 1 containers: [250be399ab1a0c358e410293a6952aadd55d48a462f2edb9c6d0b560eb323cd8]
	I0103 20:18:06.458808   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:06.462703   62015 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0103 20:18:06.462759   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0103 20:18:06.504559   62015 cri.go:89] found id: "67f470e7e603d5253da506a4ad4eacce339e32b382bb6ad5e981e6f0c40abb85"
	I0103 20:18:06.504587   62015 cri.go:89] found id: ""
	I0103 20:18:06.504596   62015 logs.go:284] 1 containers: [67f470e7e603d5253da506a4ad4eacce339e32b382bb6ad5e981e6f0c40abb85]
	I0103 20:18:06.504659   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:06.508602   62015 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0103 20:18:06.508662   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0103 20:18:06.559810   62015 cri.go:89] found id: ""
	I0103 20:18:06.559833   62015 logs.go:284] 0 containers: []
	W0103 20:18:06.559840   62015 logs.go:286] No container was found matching "kindnet"
	I0103 20:18:06.559846   62015 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0103 20:18:06.559905   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0103 20:18:06.598672   62015 cri.go:89] found id: "08f95eed823c13190efb38f0b605b92442b8229f1fd1e3b9ab3f2d7fdf18c052"
	I0103 20:18:06.598697   62015 cri.go:89] found id: "367b9549fe5f7c717d71d33d0fbf5559d0b671b4eec29201566aa8354781474d"
	I0103 20:18:06.598704   62015 cri.go:89] found id: ""
	I0103 20:18:06.598712   62015 logs.go:284] 2 containers: [08f95eed823c13190efb38f0b605b92442b8229f1fd1e3b9ab3f2d7fdf18c052 367b9549fe5f7c717d71d33d0fbf5559d0b671b4eec29201566aa8354781474d]
	I0103 20:18:06.598766   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:06.602828   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:06.607033   62015 logs.go:123] Gathering logs for describe nodes ...
	I0103 20:18:06.607050   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0103 20:18:06.758606   62015 logs.go:123] Gathering logs for storage-provisioner [367b9549fe5f7c717d71d33d0fbf5559d0b671b4eec29201566aa8354781474d] ...
	I0103 20:18:06.758634   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 367b9549fe5f7c717d71d33d0fbf5559d0b671b4eec29201566aa8354781474d"
	I0103 20:18:06.797521   62015 logs.go:123] Gathering logs for kubelet ...
	I0103 20:18:06.797552   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 20:18:06.856126   62015 logs.go:123] Gathering logs for kube-apiserver [fb19a705262540d0025443f8a9db8a3139cffa87d8fd8f412b12547818a91a8b] ...
	I0103 20:18:06.856159   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb19a705262540d0025443f8a9db8a3139cffa87d8fd8f412b12547818a91a8b"
	I0103 20:18:06.902629   62015 logs.go:123] Gathering logs for etcd [f7d2f606bd4457e00acad3e38e40d2a0eeba1830b665bd0b453061447cb63748] ...
	I0103 20:18:06.902656   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7d2f606bd4457e00acad3e38e40d2a0eeba1830b665bd0b453061447cb63748"
	I0103 20:18:06.953115   62015 logs.go:123] Gathering logs for storage-provisioner [08f95eed823c13190efb38f0b605b92442b8229f1fd1e3b9ab3f2d7fdf18c052] ...
	I0103 20:18:06.953154   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08f95eed823c13190efb38f0b605b92442b8229f1fd1e3b9ab3f2d7fdf18c052"
	I0103 20:18:06.993311   62015 logs.go:123] Gathering logs for CRI-O ...
	I0103 20:18:06.993342   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0103 20:18:07.393614   62015 logs.go:123] Gathering logs for dmesg ...
	I0103 20:18:07.393655   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 20:18:07.408367   62015 logs.go:123] Gathering logs for kube-proxy [250be399ab1a0c358e410293a6952aadd55d48a462f2edb9c6d0b560eb323cd8] ...
	I0103 20:18:07.408397   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 250be399ab1a0c358e410293a6952aadd55d48a462f2edb9c6d0b560eb323cd8"
	I0103 20:18:07.446725   62015 logs.go:123] Gathering logs for container status ...
	I0103 20:18:07.446756   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 20:18:07.494564   62015 logs.go:123] Gathering logs for coredns [b13d0a23b2b29e43671041dd6b0519d5cdc0ff78c47236c54585056c177f7f4a] ...
	I0103 20:18:07.494595   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b13d0a23b2b29e43671041dd6b0519d5cdc0ff78c47236c54585056c177f7f4a"
	I0103 20:18:07.529151   62015 logs.go:123] Gathering logs for kube-scheduler [03433af76d74a120ca843ab0d9d45d2a0808d9048bf656bff68d7b7371082893] ...
	I0103 20:18:07.529176   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03433af76d74a120ca843ab0d9d45d2a0808d9048bf656bff68d7b7371082893"
	I0103 20:18:07.577090   62015 logs.go:123] Gathering logs for kube-controller-manager [67f470e7e603d5253da506a4ad4eacce339e32b382bb6ad5e981e6f0c40abb85] ...
	I0103 20:18:07.577118   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 67f470e7e603d5253da506a4ad4eacce339e32b382bb6ad5e981e6f0c40abb85"
	I0103 20:18:05.895682   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:18:08.395751   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:18:10.396488   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:18:10.133806   62015 api_server.go:253] Checking apiserver healthz at https://192.168.61.245:8443/healthz ...
	I0103 20:18:10.138606   62015 api_server.go:279] https://192.168.61.245:8443/healthz returned 200:
	ok
	I0103 20:18:10.139965   62015 api_server.go:141] control plane version: v1.29.0-rc.2
	I0103 20:18:10.139986   62015 api_server.go:131] duration metric: took 3.895673488s to wait for apiserver health ...
	I0103 20:18:10.140004   62015 system_pods.go:43] waiting for kube-system pods to appear ...
	I0103 20:18:10.140032   62015 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0103 20:18:10.140078   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0103 20:18:10.177309   62015 cri.go:89] found id: "fb19a705262540d0025443f8a9db8a3139cffa87d8fd8f412b12547818a91a8b"
	I0103 20:18:10.177336   62015 cri.go:89] found id: ""
	I0103 20:18:10.177347   62015 logs.go:284] 1 containers: [fb19a705262540d0025443f8a9db8a3139cffa87d8fd8f412b12547818a91a8b]
	I0103 20:18:10.177398   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:10.181215   62015 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0103 20:18:10.181287   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0103 20:18:10.217151   62015 cri.go:89] found id: "f7d2f606bd4457e00acad3e38e40d2a0eeba1830b665bd0b453061447cb63748"
	I0103 20:18:10.217174   62015 cri.go:89] found id: ""
	I0103 20:18:10.217183   62015 logs.go:284] 1 containers: [f7d2f606bd4457e00acad3e38e40d2a0eeba1830b665bd0b453061447cb63748]
	I0103 20:18:10.217242   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:10.221363   62015 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0103 20:18:10.221447   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0103 20:18:10.271359   62015 cri.go:89] found id: "b13d0a23b2b29e43671041dd6b0519d5cdc0ff78c47236c54585056c177f7f4a"
	I0103 20:18:10.271387   62015 cri.go:89] found id: ""
	I0103 20:18:10.271397   62015 logs.go:284] 1 containers: [b13d0a23b2b29e43671041dd6b0519d5cdc0ff78c47236c54585056c177f7f4a]
	I0103 20:18:10.271460   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:10.277366   62015 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0103 20:18:10.277439   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0103 20:18:10.325567   62015 cri.go:89] found id: "03433af76d74a120ca843ab0d9d45d2a0808d9048bf656bff68d7b7371082893"
	I0103 20:18:10.325594   62015 cri.go:89] found id: ""
	I0103 20:18:10.325604   62015 logs.go:284] 1 containers: [03433af76d74a120ca843ab0d9d45d2a0808d9048bf656bff68d7b7371082893]
	I0103 20:18:10.325662   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:10.331222   62015 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0103 20:18:10.331292   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0103 20:18:10.370488   62015 cri.go:89] found id: "250be399ab1a0c358e410293a6952aadd55d48a462f2edb9c6d0b560eb323cd8"
	I0103 20:18:10.370516   62015 cri.go:89] found id: ""
	I0103 20:18:10.370539   62015 logs.go:284] 1 containers: [250be399ab1a0c358e410293a6952aadd55d48a462f2edb9c6d0b560eb323cd8]
	I0103 20:18:10.370598   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:10.375213   62015 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0103 20:18:10.375272   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0103 20:18:10.417606   62015 cri.go:89] found id: "67f470e7e603d5253da506a4ad4eacce339e32b382bb6ad5e981e6f0c40abb85"
	I0103 20:18:10.417626   62015 cri.go:89] found id: ""
	I0103 20:18:10.417633   62015 logs.go:284] 1 containers: [67f470e7e603d5253da506a4ad4eacce339e32b382bb6ad5e981e6f0c40abb85]
	I0103 20:18:10.417678   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:10.421786   62015 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0103 20:18:10.421848   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0103 20:18:10.459092   62015 cri.go:89] found id: ""
	I0103 20:18:10.459119   62015 logs.go:284] 0 containers: []
	W0103 20:18:10.459129   62015 logs.go:286] No container was found matching "kindnet"
	I0103 20:18:10.459136   62015 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0103 20:18:10.459184   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0103 20:18:10.504845   62015 cri.go:89] found id: "08f95eed823c13190efb38f0b605b92442b8229f1fd1e3b9ab3f2d7fdf18c052"
	I0103 20:18:10.504874   62015 cri.go:89] found id: "367b9549fe5f7c717d71d33d0fbf5559d0b671b4eec29201566aa8354781474d"
	I0103 20:18:10.504879   62015 cri.go:89] found id: ""
	I0103 20:18:10.504886   62015 logs.go:284] 2 containers: [08f95eed823c13190efb38f0b605b92442b8229f1fd1e3b9ab3f2d7fdf18c052 367b9549fe5f7c717d71d33d0fbf5559d0b671b4eec29201566aa8354781474d]
	I0103 20:18:10.504935   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:10.509189   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:10.513671   62015 logs.go:123] Gathering logs for storage-provisioner [367b9549fe5f7c717d71d33d0fbf5559d0b671b4eec29201566aa8354781474d] ...
	I0103 20:18:10.513692   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 367b9549fe5f7c717d71d33d0fbf5559d0b671b4eec29201566aa8354781474d"
	I0103 20:18:10.553961   62015 logs.go:123] Gathering logs for kubelet ...
	I0103 20:18:10.553988   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 20:18:10.606422   62015 logs.go:123] Gathering logs for dmesg ...
	I0103 20:18:10.606463   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 20:18:10.620647   62015 logs.go:123] Gathering logs for kube-controller-manager [67f470e7e603d5253da506a4ad4eacce339e32b382bb6ad5e981e6f0c40abb85] ...
	I0103 20:18:10.620677   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 67f470e7e603d5253da506a4ad4eacce339e32b382bb6ad5e981e6f0c40abb85"
	I0103 20:18:10.678322   62015 logs.go:123] Gathering logs for describe nodes ...
	I0103 20:18:10.678358   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0103 20:18:10.806514   62015 logs.go:123] Gathering logs for container status ...
	I0103 20:18:10.806569   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 20:18:10.862551   62015 logs.go:123] Gathering logs for kube-apiserver [fb19a705262540d0025443f8a9db8a3139cffa87d8fd8f412b12547818a91a8b] ...
	I0103 20:18:10.862589   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb19a705262540d0025443f8a9db8a3139cffa87d8fd8f412b12547818a91a8b"
	I0103 20:18:10.917533   62015 logs.go:123] Gathering logs for etcd [f7d2f606bd4457e00acad3e38e40d2a0eeba1830b665bd0b453061447cb63748] ...
	I0103 20:18:10.917566   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7d2f606bd4457e00acad3e38e40d2a0eeba1830b665bd0b453061447cb63748"
	I0103 20:18:10.988668   62015 logs.go:123] Gathering logs for storage-provisioner [08f95eed823c13190efb38f0b605b92442b8229f1fd1e3b9ab3f2d7fdf18c052] ...
	I0103 20:18:10.988702   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08f95eed823c13190efb38f0b605b92442b8229f1fd1e3b9ab3f2d7fdf18c052"
	I0103 20:18:11.030485   62015 logs.go:123] Gathering logs for CRI-O ...
	I0103 20:18:11.030549   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0103 20:18:11.425651   62015 logs.go:123] Gathering logs for coredns [b13d0a23b2b29e43671041dd6b0519d5cdc0ff78c47236c54585056c177f7f4a] ...
	I0103 20:18:11.425686   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b13d0a23b2b29e43671041dd6b0519d5cdc0ff78c47236c54585056c177f7f4a"
	I0103 20:18:11.481991   62015 logs.go:123] Gathering logs for kube-scheduler [03433af76d74a120ca843ab0d9d45d2a0808d9048bf656bff68d7b7371082893] ...
	I0103 20:18:11.482019   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03433af76d74a120ca843ab0d9d45d2a0808d9048bf656bff68d7b7371082893"
	I0103 20:18:11.526299   62015 logs.go:123] Gathering logs for kube-proxy [250be399ab1a0c358e410293a6952aadd55d48a462f2edb9c6d0b560eb323cd8] ...
	I0103 20:18:11.526335   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 250be399ab1a0c358e410293a6952aadd55d48a462f2edb9c6d0b560eb323cd8"
	I0103 20:18:14.082821   62015 system_pods.go:59] 8 kube-system pods found
	I0103 20:18:14.082847   62015 system_pods.go:61] "coredns-76f75df574-rbx58" [d5e91e6a-e3f9-4dbc-83ff-3069cb67847c] Running
	I0103 20:18:14.082853   62015 system_pods.go:61] "etcd-no-preload-749210" [3cfe84f3-28bd-490f-a7fc-152c1b9784ce] Running
	I0103 20:18:14.082857   62015 system_pods.go:61] "kube-apiserver-no-preload-749210" [1d9d03fa-23c6-4432-b7ec-905fcab8a628] Running
	I0103 20:18:14.082861   62015 system_pods.go:61] "kube-controller-manager-no-preload-749210" [4e4207ef-8844-4547-88a4-b12026250554] Running
	I0103 20:18:14.082865   62015 system_pods.go:61] "kube-proxy-5hwf4" [98fafdf5-9a74-4c9f-96eb-20064c72c4e1] Running
	I0103 20:18:14.082870   62015 system_pods.go:61] "kube-scheduler-no-preload-749210" [21e70024-26b0-4740-ba52-99893ca20809] Running
	I0103 20:18:14.082876   62015 system_pods.go:61] "metrics-server-57f55c9bc5-tqn5m" [8cc1dc91-fafb-4405-8820-a7f99ccbbb0c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0103 20:18:14.082881   62015 system_pods.go:61] "storage-provisioner" [1bf4f1d7-c083-47e7-9976-76bbc72e7bff] Running
	I0103 20:18:14.082887   62015 system_pods.go:74] duration metric: took 3.942878112s to wait for pod list to return data ...
	I0103 20:18:14.082893   62015 default_sa.go:34] waiting for default service account to be created ...
	I0103 20:18:14.087079   62015 default_sa.go:45] found service account: "default"
	I0103 20:18:14.087106   62015 default_sa.go:55] duration metric: took 4.207195ms for default service account to be created ...
	I0103 20:18:14.087115   62015 system_pods.go:116] waiting for k8s-apps to be running ...
	I0103 20:18:14.094161   62015 system_pods.go:86] 8 kube-system pods found
	I0103 20:18:14.094185   62015 system_pods.go:89] "coredns-76f75df574-rbx58" [d5e91e6a-e3f9-4dbc-83ff-3069cb67847c] Running
	I0103 20:18:14.094190   62015 system_pods.go:89] "etcd-no-preload-749210" [3cfe84f3-28bd-490f-a7fc-152c1b9784ce] Running
	I0103 20:18:14.094195   62015 system_pods.go:89] "kube-apiserver-no-preload-749210" [1d9d03fa-23c6-4432-b7ec-905fcab8a628] Running
	I0103 20:18:14.094199   62015 system_pods.go:89] "kube-controller-manager-no-preload-749210" [4e4207ef-8844-4547-88a4-b12026250554] Running
	I0103 20:18:14.094204   62015 system_pods.go:89] "kube-proxy-5hwf4" [98fafdf5-9a74-4c9f-96eb-20064c72c4e1] Running
	I0103 20:18:14.094208   62015 system_pods.go:89] "kube-scheduler-no-preload-749210" [21e70024-26b0-4740-ba52-99893ca20809] Running
	I0103 20:18:14.094219   62015 system_pods.go:89] "metrics-server-57f55c9bc5-tqn5m" [8cc1dc91-fafb-4405-8820-a7f99ccbbb0c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0103 20:18:14.094231   62015 system_pods.go:89] "storage-provisioner" [1bf4f1d7-c083-47e7-9976-76bbc72e7bff] Running
	I0103 20:18:14.094244   62015 system_pods.go:126] duration metric: took 7.123869ms to wait for k8s-apps to be running ...
	I0103 20:18:14.094256   62015 system_svc.go:44] waiting for kubelet service to be running ....
	I0103 20:18:14.094305   62015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 20:18:14.110365   62015 system_svc.go:56] duration metric: took 16.099582ms WaitForService to wait for kubelet.
	I0103 20:18:14.110400   62015 kubeadm.go:581] duration metric: took 4m23.767155373s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0103 20:18:14.110439   62015 node_conditions.go:102] verifying NodePressure condition ...
	I0103 20:18:14.113809   62015 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0103 20:18:14.113833   62015 node_conditions.go:123] node cpu capacity is 2
	I0103 20:18:14.113842   62015 node_conditions.go:105] duration metric: took 3.394645ms to run NodePressure ...
	I0103 20:18:14.113853   62015 start.go:228] waiting for startup goroutines ...
	I0103 20:18:14.113859   62015 start.go:233] waiting for cluster config update ...
	I0103 20:18:14.113868   62015 start.go:242] writing updated cluster config ...
	I0103 20:18:14.114102   62015 ssh_runner.go:195] Run: rm -f paused
	I0103 20:18:14.163090   62015 start.go:600] kubectl: 1.29.0, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0103 20:18:14.165173   62015 out.go:177] * Done! kubectl is now configured to use "no-preload-749210" cluster and "default" namespace by default
	I0103 20:18:10.896026   62050 pod_ready.go:81] duration metric: took 4m0.007814497s waiting for pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace to be "Ready" ...
	E0103 20:18:10.896053   62050 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0103 20:18:10.896062   62050 pod_ready.go:38] duration metric: took 4m4.550955933s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:18:10.896076   62050 api_server.go:52] waiting for apiserver process to appear ...
	I0103 20:18:10.896109   62050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0103 20:18:10.896169   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0103 20:18:10.965458   62050 cri.go:89] found id: "ce56b3ad3d4b7d59eb61a980afcc5105c128779756da52fc886d00597b03c6cc"
	I0103 20:18:10.965485   62050 cri.go:89] found id: ""
	I0103 20:18:10.965494   62050 logs.go:284] 1 containers: [ce56b3ad3d4b7d59eb61a980afcc5105c128779756da52fc886d00597b03c6cc]
	I0103 20:18:10.965552   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:10.970818   62050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0103 20:18:10.970890   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0103 20:18:11.014481   62050 cri.go:89] found id: "3bacdb6bf662499f958fae41c1520a7f963ecec63e66bca7076854532316bd9d"
	I0103 20:18:11.014509   62050 cri.go:89] found id: ""
	I0103 20:18:11.014537   62050 logs.go:284] 1 containers: [3bacdb6bf662499f958fae41c1520a7f963ecec63e66bca7076854532316bd9d]
	I0103 20:18:11.014602   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:11.019157   62050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0103 20:18:11.019220   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0103 20:18:11.068101   62050 cri.go:89] found id: "e2370f79911fd2108cab00f1fb2d4c8f16fadefbef4d6ee135f875edd865be06"
	I0103 20:18:11.068129   62050 cri.go:89] found id: ""
	I0103 20:18:11.068138   62050 logs.go:284] 1 containers: [e2370f79911fd2108cab00f1fb2d4c8f16fadefbef4d6ee135f875edd865be06]
	I0103 20:18:11.068202   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:11.075018   62050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0103 20:18:11.075098   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0103 20:18:11.122838   62050 cri.go:89] found id: "abbaa7d1ca8582bf77f6d0951a5eca6711a89471f2239b41df4bd359e01e5a7c"
	I0103 20:18:11.122862   62050 cri.go:89] found id: ""
	I0103 20:18:11.122871   62050 logs.go:284] 1 containers: [abbaa7d1ca8582bf77f6d0951a5eca6711a89471f2239b41df4bd359e01e5a7c]
	I0103 20:18:11.122925   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:11.128488   62050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0103 20:18:11.128563   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0103 20:18:11.178133   62050 cri.go:89] found id: "b1525243614b036cac3291b5a2fbf29c28fdb68757d81418e7e09cfec2c36032"
	I0103 20:18:11.178160   62050 cri.go:89] found id: ""
	I0103 20:18:11.178170   62050 logs.go:284] 1 containers: [b1525243614b036cac3291b5a2fbf29c28fdb68757d81418e7e09cfec2c36032]
	I0103 20:18:11.178233   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:11.182823   62050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0103 20:18:11.182900   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0103 20:18:11.229175   62050 cri.go:89] found id: "2b7de3342fdb5423de182fd009630fc36cee11de3a5f5ff06603fedd80e2d94b"
	I0103 20:18:11.229207   62050 cri.go:89] found id: ""
	I0103 20:18:11.229218   62050 logs.go:284] 1 containers: [2b7de3342fdb5423de182fd009630fc36cee11de3a5f5ff06603fedd80e2d94b]
	I0103 20:18:11.229271   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:11.238617   62050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0103 20:18:11.238686   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0103 20:18:11.289070   62050 cri.go:89] found id: ""
	I0103 20:18:11.289107   62050 logs.go:284] 0 containers: []
	W0103 20:18:11.289115   62050 logs.go:286] No container was found matching "kindnet"
	I0103 20:18:11.289121   62050 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0103 20:18:11.289204   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0103 20:18:11.333342   62050 cri.go:89] found id: "3d1fa8b05cd7cc8dd982013b6295fb6f79f2c8db93aa0c408fc57ea7e898125a"
	I0103 20:18:11.333365   62050 cri.go:89] found id: "365147e198ba529e04c62145182c0e5131c1812d4b8842299d0236cbf556e40f"
	I0103 20:18:11.333370   62050 cri.go:89] found id: ""
	I0103 20:18:11.333376   62050 logs.go:284] 2 containers: [3d1fa8b05cd7cc8dd982013b6295fb6f79f2c8db93aa0c408fc57ea7e898125a 365147e198ba529e04c62145182c0e5131c1812d4b8842299d0236cbf556e40f]
	I0103 20:18:11.333430   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:11.338236   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:11.342643   62050 logs.go:123] Gathering logs for container status ...
	I0103 20:18:11.342668   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 20:18:11.395443   62050 logs.go:123] Gathering logs for describe nodes ...
	I0103 20:18:11.395471   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0103 20:18:11.561224   62050 logs.go:123] Gathering logs for etcd [3bacdb6bf662499f958fae41c1520a7f963ecec63e66bca7076854532316bd9d] ...
	I0103 20:18:11.561258   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bacdb6bf662499f958fae41c1520a7f963ecec63e66bca7076854532316bd9d"
	I0103 20:18:11.619642   62050 logs.go:123] Gathering logs for kube-scheduler [abbaa7d1ca8582bf77f6d0951a5eca6711a89471f2239b41df4bd359e01e5a7c] ...
	I0103 20:18:11.619677   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abbaa7d1ca8582bf77f6d0951a5eca6711a89471f2239b41df4bd359e01e5a7c"
	I0103 20:18:11.656329   62050 logs.go:123] Gathering logs for kube-controller-manager [2b7de3342fdb5423de182fd009630fc36cee11de3a5f5ff06603fedd80e2d94b] ...
	I0103 20:18:11.656370   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b7de3342fdb5423de182fd009630fc36cee11de3a5f5ff06603fedd80e2d94b"
	I0103 20:18:11.710651   62050 logs.go:123] Gathering logs for storage-provisioner [3d1fa8b05cd7cc8dd982013b6295fb6f79f2c8db93aa0c408fc57ea7e898125a] ...
	I0103 20:18:11.710685   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d1fa8b05cd7cc8dd982013b6295fb6f79f2c8db93aa0c408fc57ea7e898125a"
	I0103 20:18:11.756839   62050 logs.go:123] Gathering logs for storage-provisioner [365147e198ba529e04c62145182c0e5131c1812d4b8842299d0236cbf556e40f] ...
	I0103 20:18:11.756866   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 365147e198ba529e04c62145182c0e5131c1812d4b8842299d0236cbf556e40f"
	I0103 20:18:11.791885   62050 logs.go:123] Gathering logs for dmesg ...
	I0103 20:18:11.791920   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 20:18:11.805161   62050 logs.go:123] Gathering logs for CRI-O ...
	I0103 20:18:11.805185   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0103 20:18:12.261916   62050 logs.go:123] Gathering logs for kubelet ...
	I0103 20:18:12.261973   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 20:18:12.316486   62050 logs.go:123] Gathering logs for kube-apiserver [ce56b3ad3d4b7d59eb61a980afcc5105c128779756da52fc886d00597b03c6cc] ...
	I0103 20:18:12.316525   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce56b3ad3d4b7d59eb61a980afcc5105c128779756da52fc886d00597b03c6cc"
	I0103 20:18:12.367998   62050 logs.go:123] Gathering logs for coredns [e2370f79911fd2108cab00f1fb2d4c8f16fadefbef4d6ee135f875edd865be06] ...
	I0103 20:18:12.368032   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2370f79911fd2108cab00f1fb2d4c8f16fadefbef4d6ee135f875edd865be06"
	I0103 20:18:12.404277   62050 logs.go:123] Gathering logs for kube-proxy [b1525243614b036cac3291b5a2fbf29c28fdb68757d81418e7e09cfec2c36032] ...
	I0103 20:18:12.404316   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1525243614b036cac3291b5a2fbf29c28fdb68757d81418e7e09cfec2c36032"
	I0103 20:18:14.943727   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:18:14.959322   62050 api_server.go:72] duration metric: took 4m14.593955756s to wait for apiserver process to appear ...
	I0103 20:18:14.959344   62050 api_server.go:88] waiting for apiserver healthz status ...
	I0103 20:18:14.959384   62050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0103 20:18:14.959443   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0103 20:18:15.001580   62050 cri.go:89] found id: "ce56b3ad3d4b7d59eb61a980afcc5105c128779756da52fc886d00597b03c6cc"
	I0103 20:18:15.001613   62050 cri.go:89] found id: ""
	I0103 20:18:15.001624   62050 logs.go:284] 1 containers: [ce56b3ad3d4b7d59eb61a980afcc5105c128779756da52fc886d00597b03c6cc]
	I0103 20:18:15.001688   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:15.005964   62050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0103 20:18:15.006044   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0103 20:18:15.043364   62050 cri.go:89] found id: "3bacdb6bf662499f958fae41c1520a7f963ecec63e66bca7076854532316bd9d"
	I0103 20:18:15.043393   62050 cri.go:89] found id: ""
	I0103 20:18:15.043403   62050 logs.go:284] 1 containers: [3bacdb6bf662499f958fae41c1520a7f963ecec63e66bca7076854532316bd9d]
	I0103 20:18:15.043461   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:15.047226   62050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0103 20:18:15.047291   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0103 20:18:15.091700   62050 cri.go:89] found id: "e2370f79911fd2108cab00f1fb2d4c8f16fadefbef4d6ee135f875edd865be06"
	I0103 20:18:15.091727   62050 cri.go:89] found id: ""
	I0103 20:18:15.091736   62050 logs.go:284] 1 containers: [e2370f79911fd2108cab00f1fb2d4c8f16fadefbef4d6ee135f875edd865be06]
	I0103 20:18:15.091794   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:15.095953   62050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0103 20:18:15.096038   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0103 20:18:15.132757   62050 cri.go:89] found id: "abbaa7d1ca8582bf77f6d0951a5eca6711a89471f2239b41df4bd359e01e5a7c"
	I0103 20:18:15.132785   62050 cri.go:89] found id: ""
	I0103 20:18:15.132796   62050 logs.go:284] 1 containers: [abbaa7d1ca8582bf77f6d0951a5eca6711a89471f2239b41df4bd359e01e5a7c]
	I0103 20:18:15.132856   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:15.137574   62050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0103 20:18:15.137637   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0103 20:18:15.174799   62050 cri.go:89] found id: "b1525243614b036cac3291b5a2fbf29c28fdb68757d81418e7e09cfec2c36032"
	I0103 20:18:15.174827   62050 cri.go:89] found id: ""
	I0103 20:18:15.174836   62050 logs.go:284] 1 containers: [b1525243614b036cac3291b5a2fbf29c28fdb68757d81418e7e09cfec2c36032]
	I0103 20:18:15.174893   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:15.179052   62050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0103 20:18:15.179119   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0103 20:18:15.218730   62050 cri.go:89] found id: "2b7de3342fdb5423de182fd009630fc36cee11de3a5f5ff06603fedd80e2d94b"
	I0103 20:18:15.218761   62050 cri.go:89] found id: ""
	I0103 20:18:15.218770   62050 logs.go:284] 1 containers: [2b7de3342fdb5423de182fd009630fc36cee11de3a5f5ff06603fedd80e2d94b]
	I0103 20:18:15.218829   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:15.222730   62050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0103 20:18:15.222796   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0103 20:18:15.265020   62050 cri.go:89] found id: ""
	I0103 20:18:15.265046   62050 logs.go:284] 0 containers: []
	W0103 20:18:15.265053   62050 logs.go:286] No container was found matching "kindnet"
	I0103 20:18:15.265059   62050 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0103 20:18:15.265122   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0103 20:18:15.307032   62050 cri.go:89] found id: "3d1fa8b05cd7cc8dd982013b6295fb6f79f2c8db93aa0c408fc57ea7e898125a"
	I0103 20:18:15.307059   62050 cri.go:89] found id: "365147e198ba529e04c62145182c0e5131c1812d4b8842299d0236cbf556e40f"
	I0103 20:18:15.307065   62050 cri.go:89] found id: ""
	I0103 20:18:15.307074   62050 logs.go:284] 2 containers: [3d1fa8b05cd7cc8dd982013b6295fb6f79f2c8db93aa0c408fc57ea7e898125a 365147e198ba529e04c62145182c0e5131c1812d4b8842299d0236cbf556e40f]
	I0103 20:18:15.307132   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:15.311275   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:15.315089   62050 logs.go:123] Gathering logs for container status ...
	I0103 20:18:15.315113   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 20:18:15.361815   62050 logs.go:123] Gathering logs for describe nodes ...
	I0103 20:18:15.361840   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0103 20:18:15.493913   62050 logs.go:123] Gathering logs for kube-apiserver [ce56b3ad3d4b7d59eb61a980afcc5105c128779756da52fc886d00597b03c6cc] ...
	I0103 20:18:15.493947   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce56b3ad3d4b7d59eb61a980afcc5105c128779756da52fc886d00597b03c6cc"
	I0103 20:18:15.553841   62050 logs.go:123] Gathering logs for coredns [e2370f79911fd2108cab00f1fb2d4c8f16fadefbef4d6ee135f875edd865be06] ...
	I0103 20:18:15.553881   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2370f79911fd2108cab00f1fb2d4c8f16fadefbef4d6ee135f875edd865be06"
	I0103 20:18:15.590885   62050 logs.go:123] Gathering logs for storage-provisioner [365147e198ba529e04c62145182c0e5131c1812d4b8842299d0236cbf556e40f] ...
	I0103 20:18:15.590911   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 365147e198ba529e04c62145182c0e5131c1812d4b8842299d0236cbf556e40f"
	I0103 20:18:15.630332   62050 logs.go:123] Gathering logs for CRI-O ...
	I0103 20:18:15.630357   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0103 20:18:16.074625   62050 logs.go:123] Gathering logs for kubelet ...
	I0103 20:18:16.074659   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 20:18:16.133116   62050 logs.go:123] Gathering logs for dmesg ...
	I0103 20:18:16.133161   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 20:18:16.147559   62050 logs.go:123] Gathering logs for kube-controller-manager [2b7de3342fdb5423de182fd009630fc36cee11de3a5f5ff06603fedd80e2d94b] ...
	I0103 20:18:16.147585   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b7de3342fdb5423de182fd009630fc36cee11de3a5f5ff06603fedd80e2d94b"
	I0103 20:18:16.199131   62050 logs.go:123] Gathering logs for storage-provisioner [3d1fa8b05cd7cc8dd982013b6295fb6f79f2c8db93aa0c408fc57ea7e898125a] ...
	I0103 20:18:16.199167   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d1fa8b05cd7cc8dd982013b6295fb6f79f2c8db93aa0c408fc57ea7e898125a"
	I0103 20:18:16.238085   62050 logs.go:123] Gathering logs for etcd [3bacdb6bf662499f958fae41c1520a7f963ecec63e66bca7076854532316bd9d] ...
	I0103 20:18:16.238116   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bacdb6bf662499f958fae41c1520a7f963ecec63e66bca7076854532316bd9d"
	I0103 20:18:16.294992   62050 logs.go:123] Gathering logs for kube-proxy [b1525243614b036cac3291b5a2fbf29c28fdb68757d81418e7e09cfec2c36032] ...
	I0103 20:18:16.295032   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1525243614b036cac3291b5a2fbf29c28fdb68757d81418e7e09cfec2c36032"
	I0103 20:18:16.333862   62050 logs.go:123] Gathering logs for kube-scheduler [abbaa7d1ca8582bf77f6d0951a5eca6711a89471f2239b41df4bd359e01e5a7c] ...
	I0103 20:18:16.333896   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abbaa7d1ca8582bf77f6d0951a5eca6711a89471f2239b41df4bd359e01e5a7c"
	I0103 20:18:18.875707   62050 api_server.go:253] Checking apiserver healthz at https://192.168.39.139:8444/healthz ...
	I0103 20:18:18.882546   62050 api_server.go:279] https://192.168.39.139:8444/healthz returned 200:
	ok
	I0103 20:18:18.884633   62050 api_server.go:141] control plane version: v1.28.4
	I0103 20:18:18.884662   62050 api_server.go:131] duration metric: took 3.925311693s to wait for apiserver health ...
	I0103 20:18:18.884672   62050 system_pods.go:43] waiting for kube-system pods to appear ...
	I0103 20:18:18.884701   62050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0103 20:18:18.884765   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0103 20:18:18.922149   62050 cri.go:89] found id: "ce56b3ad3d4b7d59eb61a980afcc5105c128779756da52fc886d00597b03c6cc"
	I0103 20:18:18.922170   62050 cri.go:89] found id: ""
	I0103 20:18:18.922177   62050 logs.go:284] 1 containers: [ce56b3ad3d4b7d59eb61a980afcc5105c128779756da52fc886d00597b03c6cc]
	I0103 20:18:18.922223   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:18.926886   62050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0103 20:18:18.926952   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0103 20:18:18.970009   62050 cri.go:89] found id: "3bacdb6bf662499f958fae41c1520a7f963ecec63e66bca7076854532316bd9d"
	I0103 20:18:18.970035   62050 cri.go:89] found id: ""
	I0103 20:18:18.970043   62050 logs.go:284] 1 containers: [3bacdb6bf662499f958fae41c1520a7f963ecec63e66bca7076854532316bd9d]
	I0103 20:18:18.970107   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:18.974349   62050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0103 20:18:18.974413   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0103 20:18:19.016970   62050 cri.go:89] found id: "e2370f79911fd2108cab00f1fb2d4c8f16fadefbef4d6ee135f875edd865be06"
	I0103 20:18:19.016994   62050 cri.go:89] found id: ""
	I0103 20:18:19.017004   62050 logs.go:284] 1 containers: [e2370f79911fd2108cab00f1fb2d4c8f16fadefbef4d6ee135f875edd865be06]
	I0103 20:18:19.017069   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:19.021899   62050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0103 20:18:19.021959   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0103 20:18:19.076044   62050 cri.go:89] found id: "abbaa7d1ca8582bf77f6d0951a5eca6711a89471f2239b41df4bd359e01e5a7c"
	I0103 20:18:19.076074   62050 cri.go:89] found id: ""
	I0103 20:18:19.076081   62050 logs.go:284] 1 containers: [abbaa7d1ca8582bf77f6d0951a5eca6711a89471f2239b41df4bd359e01e5a7c]
	I0103 20:18:19.076134   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:19.081699   62050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0103 20:18:19.081775   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0103 20:18:19.120022   62050 cri.go:89] found id: "b1525243614b036cac3291b5a2fbf29c28fdb68757d81418e7e09cfec2c36032"
	I0103 20:18:19.120046   62050 cri.go:89] found id: ""
	I0103 20:18:19.120053   62050 logs.go:284] 1 containers: [b1525243614b036cac3291b5a2fbf29c28fdb68757d81418e7e09cfec2c36032]
	I0103 20:18:19.120107   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:19.124627   62050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0103 20:18:19.124698   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0103 20:18:19.165431   62050 cri.go:89] found id: "2b7de3342fdb5423de182fd009630fc36cee11de3a5f5ff06603fedd80e2d94b"
	I0103 20:18:19.165453   62050 cri.go:89] found id: ""
	I0103 20:18:19.165463   62050 logs.go:284] 1 containers: [2b7de3342fdb5423de182fd009630fc36cee11de3a5f5ff06603fedd80e2d94b]
	I0103 20:18:19.165513   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:19.170214   62050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0103 20:18:19.170282   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0103 20:18:19.208676   62050 cri.go:89] found id: ""
	I0103 20:18:19.208706   62050 logs.go:284] 0 containers: []
	W0103 20:18:19.208716   62050 logs.go:286] No container was found matching "kindnet"
	I0103 20:18:19.208724   62050 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0103 20:18:19.208782   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0103 20:18:19.246065   62050 cri.go:89] found id: "3d1fa8b05cd7cc8dd982013b6295fb6f79f2c8db93aa0c408fc57ea7e898125a"
	I0103 20:18:19.246092   62050 cri.go:89] found id: "365147e198ba529e04c62145182c0e5131c1812d4b8842299d0236cbf556e40f"
	I0103 20:18:19.246101   62050 cri.go:89] found id: ""
	I0103 20:18:19.246109   62050 logs.go:284] 2 containers: [3d1fa8b05cd7cc8dd982013b6295fb6f79f2c8db93aa0c408fc57ea7e898125a 365147e198ba529e04c62145182c0e5131c1812d4b8842299d0236cbf556e40f]
	I0103 20:18:19.246169   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:19.250217   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:19.259598   62050 logs.go:123] Gathering logs for CRI-O ...
	I0103 20:18:19.259628   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0103 20:18:19.643718   62050 logs.go:123] Gathering logs for kube-apiserver [ce56b3ad3d4b7d59eb61a980afcc5105c128779756da52fc886d00597b03c6cc] ...
	I0103 20:18:19.643755   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce56b3ad3d4b7d59eb61a980afcc5105c128779756da52fc886d00597b03c6cc"
	I0103 20:18:19.697873   62050 logs.go:123] Gathering logs for etcd [3bacdb6bf662499f958fae41c1520a7f963ecec63e66bca7076854532316bd9d] ...
	I0103 20:18:19.697905   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bacdb6bf662499f958fae41c1520a7f963ecec63e66bca7076854532316bd9d"
	I0103 20:18:19.762995   62050 logs.go:123] Gathering logs for kubelet ...
	I0103 20:18:19.763030   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 20:18:19.830835   62050 logs.go:123] Gathering logs for describe nodes ...
	I0103 20:18:19.830871   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0103 20:18:19.969465   62050 logs.go:123] Gathering logs for kube-proxy [b1525243614b036cac3291b5a2fbf29c28fdb68757d81418e7e09cfec2c36032] ...
	I0103 20:18:19.969501   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1525243614b036cac3291b5a2fbf29c28fdb68757d81418e7e09cfec2c36032"
	I0103 20:18:20.011269   62050 logs.go:123] Gathering logs for kube-controller-manager [2b7de3342fdb5423de182fd009630fc36cee11de3a5f5ff06603fedd80e2d94b] ...
	I0103 20:18:20.011301   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b7de3342fdb5423de182fd009630fc36cee11de3a5f5ff06603fedd80e2d94b"
	I0103 20:18:20.059317   62050 logs.go:123] Gathering logs for storage-provisioner [3d1fa8b05cd7cc8dd982013b6295fb6f79f2c8db93aa0c408fc57ea7e898125a] ...
	I0103 20:18:20.059352   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d1fa8b05cd7cc8dd982013b6295fb6f79f2c8db93aa0c408fc57ea7e898125a"
	I0103 20:18:20.099428   62050 logs.go:123] Gathering logs for storage-provisioner [365147e198ba529e04c62145182c0e5131c1812d4b8842299d0236cbf556e40f] ...
	I0103 20:18:20.099455   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 365147e198ba529e04c62145182c0e5131c1812d4b8842299d0236cbf556e40f"
	I0103 20:18:20.135773   62050 logs.go:123] Gathering logs for dmesg ...
	I0103 20:18:20.135809   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 20:18:20.149611   62050 logs.go:123] Gathering logs for kube-scheduler [abbaa7d1ca8582bf77f6d0951a5eca6711a89471f2239b41df4bd359e01e5a7c] ...
	I0103 20:18:20.149649   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abbaa7d1ca8582bf77f6d0951a5eca6711a89471f2239b41df4bd359e01e5a7c"
	I0103 20:18:20.190742   62050 logs.go:123] Gathering logs for container status ...
	I0103 20:18:20.190788   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 20:18:20.241115   62050 logs.go:123] Gathering logs for coredns [e2370f79911fd2108cab00f1fb2d4c8f16fadefbef4d6ee135f875edd865be06] ...
	I0103 20:18:20.241142   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2370f79911fd2108cab00f1fb2d4c8f16fadefbef4d6ee135f875edd865be06"
	I0103 20:18:22.789475   62050 system_pods.go:59] 8 kube-system pods found
	I0103 20:18:22.789502   62050 system_pods.go:61] "coredns-5dd5756b68-zxzqg" [d066762e-7e1f-4b3a-9b21-6a7a3ca53edd] Running
	I0103 20:18:22.789507   62050 system_pods.go:61] "etcd-default-k8s-diff-port-018788" [c0023ec6-ae61-4532-840e-287e9945f4ec] Running
	I0103 20:18:22.789512   62050 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-018788" [bba03f36-cef8-4e19-adc5-1a65756bdf1c] Running
	I0103 20:18:22.789516   62050 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-018788" [baf7a3c2-3573-4977-be30-d63e4df2de22] Running
	I0103 20:18:22.789520   62050 system_pods.go:61] "kube-proxy-wqjlv" [de5a1b04-4bce-4111-bfe8-2adb2f947d78] Running
	I0103 20:18:22.789527   62050 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-018788" [cdc74e5c-0085-49ae-9471-fce52a1a6b2f] Running
	I0103 20:18:22.789533   62050 system_pods.go:61] "metrics-server-57f55c9bc5-pgbbj" [ee3963d9-1627-4e78-91e5-1f92c2011f4b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0103 20:18:22.789538   62050 system_pods.go:61] "storage-provisioner" [ef3511cb-5587-4ea5-86b6-d52cc5afb226] Running
	I0103 20:18:22.789544   62050 system_pods.go:74] duration metric: took 3.904866616s to wait for pod list to return data ...
	I0103 20:18:22.789551   62050 default_sa.go:34] waiting for default service account to be created ...
	I0103 20:18:22.791976   62050 default_sa.go:45] found service account: "default"
	I0103 20:18:22.792000   62050 default_sa.go:55] duration metric: took 2.444229ms for default service account to be created ...
	I0103 20:18:22.792007   62050 system_pods.go:116] waiting for k8s-apps to be running ...
	I0103 20:18:22.797165   62050 system_pods.go:86] 8 kube-system pods found
	I0103 20:18:22.797186   62050 system_pods.go:89] "coredns-5dd5756b68-zxzqg" [d066762e-7e1f-4b3a-9b21-6a7a3ca53edd] Running
	I0103 20:18:22.797192   62050 system_pods.go:89] "etcd-default-k8s-diff-port-018788" [c0023ec6-ae61-4532-840e-287e9945f4ec] Running
	I0103 20:18:22.797196   62050 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-018788" [bba03f36-cef8-4e19-adc5-1a65756bdf1c] Running
	I0103 20:18:22.797200   62050 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-018788" [baf7a3c2-3573-4977-be30-d63e4df2de22] Running
	I0103 20:18:22.797204   62050 system_pods.go:89] "kube-proxy-wqjlv" [de5a1b04-4bce-4111-bfe8-2adb2f947d78] Running
	I0103 20:18:22.797209   62050 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-018788" [cdc74e5c-0085-49ae-9471-fce52a1a6b2f] Running
	I0103 20:18:22.797221   62050 system_pods.go:89] "metrics-server-57f55c9bc5-pgbbj" [ee3963d9-1627-4e78-91e5-1f92c2011f4b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0103 20:18:22.797234   62050 system_pods.go:89] "storage-provisioner" [ef3511cb-5587-4ea5-86b6-d52cc5afb226] Running
	I0103 20:18:22.797244   62050 system_pods.go:126] duration metric: took 5.231578ms to wait for k8s-apps to be running ...
	I0103 20:18:22.797256   62050 system_svc.go:44] waiting for kubelet service to be running ....
	I0103 20:18:22.797303   62050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 20:18:22.811467   62050 system_svc.go:56] duration metric: took 14.201511ms WaitForService to wait for kubelet.
	I0103 20:18:22.811503   62050 kubeadm.go:581] duration metric: took 4m22.446143128s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0103 20:18:22.811533   62050 node_conditions.go:102] verifying NodePressure condition ...
	I0103 20:18:22.814594   62050 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0103 20:18:22.814617   62050 node_conditions.go:123] node cpu capacity is 2
	I0103 20:18:22.814629   62050 node_conditions.go:105] duration metric: took 3.089727ms to run NodePressure ...
	I0103 20:18:22.814639   62050 start.go:228] waiting for startup goroutines ...
	I0103 20:18:22.814645   62050 start.go:233] waiting for cluster config update ...
	I0103 20:18:22.814654   62050 start.go:242] writing updated cluster config ...
	I0103 20:18:22.814897   62050 ssh_runner.go:195] Run: rm -f paused
	I0103 20:18:22.864761   62050 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0103 20:18:22.866755   62050 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-018788" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	-- Journal begins at Wed 2024-01-03 20:13:42 UTC, ends at Wed 2024-01-03 20:23:33 UTC. --
	Jan 03 20:23:33 old-k8s-version-927922 crio[717]: time="2024-01-03 20:23:33.511988323Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=087f345d-9a44-4fc4-a6a0-2b8720269578 name=/runtime.v1.RuntimeService/Version
	Jan 03 20:23:33 old-k8s-version-927922 crio[717]: time="2024-01-03 20:23:33.513368417Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=42df8075-8f17-4420-b6a9-da7232df4260 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 20:23:33 old-k8s-version-927922 crio[717]: time="2024-01-03 20:23:33.513845467Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704313413513828614,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=42df8075-8f17-4420-b6a9-da7232df4260 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 20:23:33 old-k8s-version-927922 crio[717]: time="2024-01-03 20:23:33.514746786Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=01fee75f-db52-481f-9154-18e64a0ebec4 name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:23:33 old-k8s-version-927922 crio[717]: time="2024-01-03 20:23:33.514801335Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=01fee75f-db52-481f-9154-18e64a0ebec4 name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:23:33 old-k8s-version-927922 crio[717]: time="2024-01-03 20:23:33.515000520Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:320393ddb07553eb44a54c112d172ce04185d7ac58e27c5d44217b4711153907,PodSandboxId:7c321163110595ffe03bfd0c93467e79648b641fbd7ffaf14461512cc89dba61,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1704312866461634949,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ec52ba5e-d926-4b8f-abb8-0381cf3f985a,},Annotations:map[string]string{io.kubernetes.container.hash: d91788b,io.kubernetes.container.restartCount: 0,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45917899cfd41ece572d722e8d76510aa569a5b9a80e7899d35c3844125855b6,PodSandboxId:d34f9861cf860e8552cc8b0f865e95e6c7acda606aa03eb31e00ebd5afb34591,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1704312863835091625,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-nvbsl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22884cc1-f360-4ee8-bafc-340bb24faa41,},Annotations:map[string]string{io.kubernetes.container.hash: 1b28c3cb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\
":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7169f167164d608b443918e6d53248d93a1f5d91d15c4db2f35a6bc93ee1be3,PodSandboxId:e6ed96711a089716a954eb12c0f266dc158499cd4ba9a4d239004e387003ed42,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704312862364601363,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: 4157ff41-1b3b-4eb7-b23b-2de69398161c,},Annotations:map[string]string{io.kubernetes.container.hash: 70e97194,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a196e4fc88e5e12ebea815c63f5444bdf901c0f88e5e48f515af4a095def802,PodSandboxId:2eb19fa47dc53b41e9c56d34b8d9a4400c037efadaca09b0a7544baf9a66b148,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1704312861740798805,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jk7jw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef720f69-1bfd-4e75-9943-
ff7ee3145ecc,},Annotations:map[string]string{io.kubernetes.container.hash: 8a94f92b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8a40bb274f500d3acbfd95cef5b55e0ea95441522e180afffcc40eaf2605db1,PodSandboxId:f8dee6e4f3ff62e9f966be9cabc065cb086203b28be0cc63887f0dcd958af645,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1704312854877894597,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-927922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fe1bb94b97e48f63d9431bddbebf185,},Annotations:map[string]string{io.kub
ernetes.container.hash: fe931f92,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a82afd69651caaa0dee810c76dd80ddd78630b9ffab8e30e5edd67a82dba78b7,PodSandboxId:c0bdb285cbdce3946787cdb8ae3cf14bda0957ddc972b254fccbfeffac7e06b6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1704312853747571570,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-927922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b39706a67360d65bfa3cf2560791efe9,},Annotations
:map[string]string{io.kubernetes.container.hash: f336ba89,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8babac0762b1da3e7fc5037f5d7cf07ab1bf456ae68951526a6123c7249f18c,PodSandboxId:0970fde04b7f743edc8b79467f4d1b419ace87ff650728f2b5bccbeede0a9e90,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1704312853609423470,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-927922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string
]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40cdf59c968e44473516fdcc829b115c30ac1c817dafebc6dcf8b22fe28171b3,PodSandboxId:4d52f9a6f958830d7b7944f26eafee1430b4f6e21c49fa231e958d49f1e5135c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1704312853354886697,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-927922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12703be281c3cbcafa1a958acc881c41,},Annotations:map[string]string{io.
kubernetes.container.hash: 95ed9a70,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=01fee75f-db52-481f-9154-18e64a0ebec4 name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:23:33 old-k8s-version-927922 crio[717]: time="2024-01-03 20:23:33.556752826Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=31986e57-03b3-4e82-85ce-7e147b97e6c3 name=/runtime.v1.RuntimeService/Version
	Jan 03 20:23:33 old-k8s-version-927922 crio[717]: time="2024-01-03 20:23:33.556837993Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=31986e57-03b3-4e82-85ce-7e147b97e6c3 name=/runtime.v1.RuntimeService/Version
	Jan 03 20:23:33 old-k8s-version-927922 crio[717]: time="2024-01-03 20:23:33.558309308Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=6362af05-cdf0-42f0-842b-e22f58b1b0d1 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 20:23:33 old-k8s-version-927922 crio[717]: time="2024-01-03 20:23:33.558927226Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704313413558906017,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=6362af05-cdf0-42f0-842b-e22f58b1b0d1 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 20:23:33 old-k8s-version-927922 crio[717]: time="2024-01-03 20:23:33.559977592Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=eb445f70-a8bd-47df-9990-941dcb34e001 name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:23:33 old-k8s-version-927922 crio[717]: time="2024-01-03 20:23:33.560045314Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=eb445f70-a8bd-47df-9990-941dcb34e001 name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:23:33 old-k8s-version-927922 crio[717]: time="2024-01-03 20:23:33.560294104Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:320393ddb07553eb44a54c112d172ce04185d7ac58e27c5d44217b4711153907,PodSandboxId:7c321163110595ffe03bfd0c93467e79648b641fbd7ffaf14461512cc89dba61,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1704312866461634949,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ec52ba5e-d926-4b8f-abb8-0381cf3f985a,},Annotations:map[string]string{io.kubernetes.container.hash: d91788b,io.kubernetes.container.restartCount: 0,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45917899cfd41ece572d722e8d76510aa569a5b9a80e7899d35c3844125855b6,PodSandboxId:d34f9861cf860e8552cc8b0f865e95e6c7acda606aa03eb31e00ebd5afb34591,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1704312863835091625,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-nvbsl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22884cc1-f360-4ee8-bafc-340bb24faa41,},Annotations:map[string]string{io.kubernetes.container.hash: 1b28c3cb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\
":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7169f167164d608b443918e6d53248d93a1f5d91d15c4db2f35a6bc93ee1be3,PodSandboxId:e6ed96711a089716a954eb12c0f266dc158499cd4ba9a4d239004e387003ed42,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704312862364601363,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: 4157ff41-1b3b-4eb7-b23b-2de69398161c,},Annotations:map[string]string{io.kubernetes.container.hash: 70e97194,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a196e4fc88e5e12ebea815c63f5444bdf901c0f88e5e48f515af4a095def802,PodSandboxId:2eb19fa47dc53b41e9c56d34b8d9a4400c037efadaca09b0a7544baf9a66b148,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1704312861740798805,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jk7jw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef720f69-1bfd-4e75-9943-
ff7ee3145ecc,},Annotations:map[string]string{io.kubernetes.container.hash: 8a94f92b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8a40bb274f500d3acbfd95cef5b55e0ea95441522e180afffcc40eaf2605db1,PodSandboxId:f8dee6e4f3ff62e9f966be9cabc065cb086203b28be0cc63887f0dcd958af645,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1704312854877894597,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-927922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fe1bb94b97e48f63d9431bddbebf185,},Annotations:map[string]string{io.kub
ernetes.container.hash: fe931f92,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a82afd69651caaa0dee810c76dd80ddd78630b9ffab8e30e5edd67a82dba78b7,PodSandboxId:c0bdb285cbdce3946787cdb8ae3cf14bda0957ddc972b254fccbfeffac7e06b6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1704312853747571570,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-927922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b39706a67360d65bfa3cf2560791efe9,},Annotations
:map[string]string{io.kubernetes.container.hash: f336ba89,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8babac0762b1da3e7fc5037f5d7cf07ab1bf456ae68951526a6123c7249f18c,PodSandboxId:0970fde04b7f743edc8b79467f4d1b419ace87ff650728f2b5bccbeede0a9e90,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1704312853609423470,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-927922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string
]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40cdf59c968e44473516fdcc829b115c30ac1c817dafebc6dcf8b22fe28171b3,PodSandboxId:4d52f9a6f958830d7b7944f26eafee1430b4f6e21c49fa231e958d49f1e5135c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1704312853354886697,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-927922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12703be281c3cbcafa1a958acc881c41,},Annotations:map[string]string{io.
kubernetes.container.hash: 95ed9a70,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=eb445f70-a8bd-47df-9990-941dcb34e001 name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:23:33 old-k8s-version-927922 crio[717]: time="2024-01-03 20:23:33.576815971Z" level=debug msg="Request: &ListImagesRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=3b3628bb-6eb6-49e7-9220-56ecd3debf06 name=/runtime.v1alpha2.ImageService/ListImages
	Jan 03 20:23:33 old-k8s-version-927922 crio[717]: time="2024-01-03 20:23:33.577039509Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e\"" file="storage/storage_transport.go:185"
	Jan 03 20:23:33 old-k8s-version-927922 crio[717]: time="2024-01-03 20:23:33.577161908Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d\"" file="storage/storage_transport.go:185"
	Jan 03 20:23:33 old-k8s-version-927922 crio[717]: time="2024-01-03 20:23:33.577207958Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a\"" file="storage/storage_transport.go:185"
	Jan 03 20:23:33 old-k8s-version-927922 crio[717]: time="2024-01-03 20:23:33.577257082Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384\"" file="storage/storage_transport.go:185"
	Jan 03 20:23:33 old-k8s-version-927922 crio[717]: time="2024-01-03 20:23:33.577302840Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e\"" file="storage/storage_transport.go:185"
	Jan 03 20:23:33 old-k8s-version-927922 crio[717]: time="2024-01-03 20:23:33.577370692Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed\"" file="storage/storage_transport.go:185"
	Jan 03 20:23:33 old-k8s-version-927922 crio[717]: time="2024-01-03 20:23:33.577410294Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b\"" file="storage/storage_transport.go:185"
	Jan 03 20:23:33 old-k8s-version-927922 crio[717]: time="2024-01-03 20:23:33.577509773Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562\"" file="storage/storage_transport.go:185"
	Jan 03 20:23:33 old-k8s-version-927922 crio[717]: time="2024-01-03 20:23:33.577585387Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb\"" file="storage/storage_transport.go:185"
	Jan 03 20:23:33 old-k8s-version-927922 crio[717]: time="2024-01-03 20:23:33.577639019Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"" file="storage/storage_transport.go:185"
	Jan 03 20:23:33 old-k8s-version-927922 crio[717]: time="2024-01-03 20:23:33.577794070Z" level=debug msg="Response: &ListImagesResponse{Images:[]*Image{&Image{Id:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,RepoTags:[k8s.gcr.io/kube-apiserver:v1.16.0],RepoDigests:[k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6 k8s.gcr.io/kube-apiserver@sha256:f4168527c91289da2708f62ae729fdde5fb484167dd05ffbb7ab666f60de96cd],Size_:218626356,Uid:nil,Username:,Spec:nil,},&Image{Id:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,RepoTags:[k8s.gcr.io/kube-controller-manager:v1.16.0],RepoDigests:[k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4 k8s.gcr.io/kube-controller-manager@sha256:c156a05ee9d40e3ca2ebf9337f38a10558c1fc6c9124006f128a82e6c38cdf3e],Size_:164869446,Uid:nil,Username:,Spec:nil,},&Image{Id:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,RepoTags:[k8
s.gcr.io/kube-scheduler:v1.16.0],RepoDigests:[k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0 k8s.gcr.io/kube-scheduler@sha256:3c3b28b0a7b08893718d93cbf533928aa0b69cb3669856eabab0021c2dcb68c3],Size_:88825138,Uid:nil,Username:,Spec:nil,},&Image{Id:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,RepoTags:[k8s.gcr.io/kube-proxy:v1.16.0],RepoDigests:[k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c k8s.gcr.io/kube-proxy@sha256:e7f0f8e320cfeeaafdc9c0cb8e23f51e542fa1d955ae39c8131a0531ba72c794],Size_:87920566,Uid:nil,Username:,Spec:nil,},&Image{Id:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e,RepoTags:[k8s.gcr.io/pause:3.1 registry.k8s.io/pause:3.1],RepoDigests:[k8s.gcr.io/pause@sha256:59eec8837a4d942cc19a52b8c09ea75121acc38114a2c68b98983ce9356b8610 k8s.gcr.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a44
07a5686c46983a2c2eeed83929b888179acea registry.k8s.io/pause@sha256:59eec8837a4d942cc19a52b8c09ea75121acc38114a2c68b98983ce9356b8610 registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e registry.k8s.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea],Size_:749103,Uid:nil,Username:,Spec:nil,},&Image{Id:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,RepoTags:[k8s.gcr.io/etcd:3.3.15-0],RepoDigests:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd@sha256:37a8acab63de5556d47bfbe76d649ae63f83ea7481584a2be0dbffb77825f692],Size_:248212167,Uid:nil,Username:,Spec:nil,},&Image{Id:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,RepoTags:[k8s.gcr.io/coredns:1.6.2],RepoDigests:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 k8s.gcr.io/coredns@sha256:4dd4d0e5bcc9bd0e8189f6fa4d4965ffa81207d8d99d29391f28cbd1a70a0163],Size_:44
231648,Uid:nil,Username:,Spec:nil,},&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,},&Image{Id:6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb,RepoTags:[docker.io/kindest/kindnetd:v20210326-1e038dc5],RepoDigests:[docker.io/kindest/kindnetd@sha256:060b2c2951523b42490bae659c4a68989de84e013a7406fcce27b82f1a8c2bc1 docker.io/kindest/kindnetd@sha256:838bc1706e38391aefaa31fd52619fe8e57ad3dfb0d0ff414d902367fcc24c3c],Size_:119984626,Uid:nil,Username:,Spec:nil,},&Image{Id:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,RepoTags:[gcr.io/k8s-minikube/busybox:1.28.4-glibc],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb992
50061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998],Size_:4631262,Uid:nil,Username:,Spec:nil,},},}" file="go-grpc-middleware/chain.go:25" id=3b3628bb-6eb6-49e7-9220-56ecd3debf06 name=/runtime.v1alpha2.ImageService/ListImages
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	320393ddb0755       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   9 minutes ago       Running             busybox                   0                   7c32116311059       busybox
	45917899cfd41       bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b                                      9 minutes ago       Running             coredns                   0                   d34f9861cf860       coredns-5644d7b6d9-nvbsl
	b7169f167164d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      9 minutes ago       Running             storage-provisioner       0                   e6ed96711a089       storage-provisioner
	7a196e4fc88e5       c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384                                      9 minutes ago       Running             kube-proxy                0                   2eb19fa47dc53       kube-proxy-jk7jw
	c8a40bb274f50       b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed                                      9 minutes ago       Running             etcd                      0                   f8dee6e4f3ff6       etcd-old-k8s-version-927922
	a82afd69651ca       06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d                                      9 minutes ago       Running             kube-controller-manager   0                   c0bdb285cbdce       kube-controller-manager-old-k8s-version-927922
	f8babac0762b1       301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a                                      9 minutes ago       Running             kube-scheduler            0                   0970fde04b7f7       kube-scheduler-old-k8s-version-927922
	40cdf59c968e4       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e                                      9 minutes ago       Running             kube-apiserver            0                   4d52f9a6f9588       kube-apiserver-old-k8s-version-927922
	
	
	==> coredns [45917899cfd41ece572d722e8d76510aa569a5b9a80e7899d35c3844125855b6] <==
	E0103 20:04:34.191021       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E0103 20:04:34.198189       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E0103 20:04:34.190923       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	I0103 20:04:34.190824       1 trace.go:82] Trace[859965114]: "Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2024-01-03 20:04:04.190460127 +0000 UTC m=+0.247147512) (total time: 30.000323536s):
	Trace[859965114]: [30.000323536s] [30.000323536s] END
	E0103 20:04:34.191021       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E0103 20:04:34.191021       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E0103 20:04:34.191021       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	I0103 20:04:34.198057       1 trace.go:82] Trace[1179518053]: "Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2024-01-03 20:04:04.189200746 +0000 UTC m=+0.245888159) (total time: 30.008836728s):
	Trace[1179518053]: [30.008836728s] [30.008836728s] END
	E0103 20:04:34.198189       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E0103 20:04:34.198189       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E0103 20:04:34.198189       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	2024-01-03T20:04:34.587Z [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	2024-01-03T20:04:39.128Z [INFO] plugin/reload: Running configuration MD5 = 6485d707d03bc60ccfd5c7f4afc8c245
	[INFO] Reloading complete
	[INFO] SIGTERM: Shutting down servers then terminating
	.:53
	2024-01-03T20:14:24.087Z [INFO] plugin/reload: Running configuration MD5 = 6485d707d03bc60ccfd5c7f4afc8c245
	2024-01-03T20:14:24.087Z [INFO] CoreDNS-1.6.2
	2024-01-03T20:14:24.087Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	2024-01-03T20:14:24.097Z [INFO] 127.0.0.1:58358 - 7510 "HINFO IN 2319616804106500077.1178016545245940769. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009293278s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-927922
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-927922
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b6a81cbc05f28310ff11df4170e79e2b8bf477a
	                    minikube.k8s.io/name=old-k8s-version-927922
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_03T20_03_47_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 03 Jan 2024 20:03:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 03 Jan 2024 20:22:49 +0000   Wed, 03 Jan 2024 20:03:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 03 Jan 2024 20:22:49 +0000   Wed, 03 Jan 2024 20:03:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 03 Jan 2024 20:22:49 +0000   Wed, 03 Jan 2024 20:03:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 03 Jan 2024 20:22:49 +0000   Wed, 03 Jan 2024 20:14:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.12
	  Hostname:    old-k8s-version-927922
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 ce300228261d46a38a32a0015400aff0
	 System UUID:                ce300228-261d-46a3-8a32-a0015400aff0
	 Boot ID:                    3e6c84e4-38e8-4e0b-90ee-ebf292985fe7
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  cri-o://1.24.1
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (9 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  default                    busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                coredns-5644d7b6d9-nvbsl                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     19m
	  kube-system                etcd-old-k8s-version-927922                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                kube-apiserver-old-k8s-version-927922             250m (12%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                kube-controller-manager-old-k8s-version-927922    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m14s
	  kube-system                kube-proxy-jk7jw                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                kube-scheduler-old-k8s-version-927922             100m (5%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                metrics-server-74d5856cc6-kqzhm                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         8m57s
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             270Mi (12%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From                                Message
	  ----    ------                   ----                   ----                                -------
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)      kubelet, old-k8s-version-927922     Node old-k8s-version-927922 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x7 over 19m)      kubelet, old-k8s-version-927922     Node old-k8s-version-927922 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x8 over 19m)      kubelet, old-k8s-version-927922     Node old-k8s-version-927922 status is now: NodeHasSufficientPID
	  Normal  Starting                 19m                    kube-proxy, old-k8s-version-927922  Starting kube-proxy.
	  Normal  Starting                 9m21s                  kubelet, old-k8s-version-927922     Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m21s (x8 over 9m21s)  kubelet, old-k8s-version-927922     Node old-k8s-version-927922 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m21s (x8 over 9m21s)  kubelet, old-k8s-version-927922     Node old-k8s-version-927922 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m21s (x7 over 9m21s)  kubelet, old-k8s-version-927922     Node old-k8s-version-927922 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m21s                  kubelet, old-k8s-version-927922     Updated Node Allocatable limit across pods
	  Normal  Starting                 9m12s                  kube-proxy, old-k8s-version-927922  Starting kube-proxy.
	
	
	==> dmesg <==
	[Jan 3 20:13] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.070593] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.548648] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.804157] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.153618] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.406103] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.023217] systemd-fstab-generator[642]: Ignoring "noauto" for root device
	[  +0.175537] systemd-fstab-generator[653]: Ignoring "noauto" for root device
	[  +0.214652] systemd-fstab-generator[666]: Ignoring "noauto" for root device
	[  +0.168851] systemd-fstab-generator[677]: Ignoring "noauto" for root device
	[  +0.235349] systemd-fstab-generator[701]: Ignoring "noauto" for root device
	[Jan 3 20:14] systemd-fstab-generator[1032]: Ignoring "noauto" for root device
	[  +0.420984] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[ +23.762124] kauditd_printk_skb: 13 callbacks suppressed
	
	
	==> etcd [c8a40bb274f500d3acbfd95cef5b55e0ea95441522e180afffcc40eaf2605db1] <==
	2024-01-03 20:14:14.995626 I | etcdserver: heartbeat = 100ms
	2024-01-03 20:14:14.995722 I | etcdserver: election = 1000ms
	2024-01-03 20:14:14.995743 I | etcdserver: snapshot count = 10000
	2024-01-03 20:14:14.995832 I | etcdserver: advertise client URLs = https://192.168.72.12:2379
	2024-01-03 20:14:15.004373 I | etcdserver: restarting member ab05bc745795456d in cluster 800e3fcdc6b6742c at commit index 538
	2024-01-03 20:14:15.004563 I | raft: ab05bc745795456d became follower at term 2
	2024-01-03 20:14:15.004647 I | raft: newRaft ab05bc745795456d [peers: [], term: 2, commit: 538, applied: 0, lastindex: 538, lastterm: 2]
	2024-01-03 20:14:15.017386 W | auth: simple token is not cryptographically signed
	2024-01-03 20:14:15.020294 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2024-01-03 20:14:15.021797 I | etcdserver/membership: added member ab05bc745795456d [https://192.168.72.12:2380] to cluster 800e3fcdc6b6742c
	2024-01-03 20:14:15.021927 N | etcdserver/membership: set the initial cluster version to 3.3
	2024-01-03 20:14:15.021970 I | etcdserver/api: enabled capabilities for version 3.3
	2024-01-03 20:14:15.027235 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2024-01-03 20:14:15.027750 I | embed: listening for metrics on http://192.168.72.12:2381
	2024-01-03 20:14:15.027825 I | embed: listening for metrics on http://127.0.0.1:2381
	2024-01-03 20:14:16.305389 I | raft: ab05bc745795456d is starting a new election at term 2
	2024-01-03 20:14:16.305546 I | raft: ab05bc745795456d became candidate at term 3
	2024-01-03 20:14:16.305575 I | raft: ab05bc745795456d received MsgVoteResp from ab05bc745795456d at term 3
	2024-01-03 20:14:16.305597 I | raft: ab05bc745795456d became leader at term 3
	2024-01-03 20:14:16.305614 I | raft: raft.node: ab05bc745795456d elected leader ab05bc745795456d at term 3
	2024-01-03 20:14:16.305927 I | etcdserver: published {Name:old-k8s-version-927922 ClientURLs:[https://192.168.72.12:2379]} to cluster 800e3fcdc6b6742c
	2024-01-03 20:14:16.306261 I | embed: ready to serve client requests
	2024-01-03 20:14:16.306510 I | embed: ready to serve client requests
	2024-01-03 20:14:16.307442 I | embed: serving client requests on 127.0.0.1:2379
	2024-01-03 20:14:16.308880 I | embed: serving client requests on 192.168.72.12:2379
	
	
	==> kernel <==
	 20:23:33 up 9 min,  0 users,  load average: 0.03, 0.09, 0.08
	Linux old-k8s-version-927922 5.10.57 #1 SMP Sat Dec 16 11:03:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [40cdf59c968e44473516fdcc829b115c30ac1c817dafebc6dcf8b22fe28171b3] <==
	I0103 20:15:21.257389       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0103 20:15:21.257562       1 handler_proxy.go:99] no RequestInfo found in the context
	E0103 20:15:21.257652       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0103 20:15:21.257669       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0103 20:17:21.258196       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0103 20:17:21.258694       1 handler_proxy.go:99] no RequestInfo found in the context
	E0103 20:17:21.258847       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0103 20:17:21.258884       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0103 20:19:20.573113       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0103 20:19:20.573279       1 handler_proxy.go:99] no RequestInfo found in the context
	E0103 20:19:20.573361       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0103 20:19:20.573372       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0103 20:20:20.573661       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0103 20:20:20.573811       1 handler_proxy.go:99] no RequestInfo found in the context
	E0103 20:20:20.573856       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0103 20:20:20.573876       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0103 20:22:20.574315       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0103 20:22:20.574492       1 handler_proxy.go:99] no RequestInfo found in the context
	E0103 20:22:20.574568       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0103 20:22:20.574595       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [a82afd69651caaa0dee810c76dd80ddd78630b9ffab8e30e5edd67a82dba78b7] <==
	E0103 20:17:08.587999       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0103 20:17:19.424229       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0103 20:17:38.840203       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0103 20:17:51.426652       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0103 20:18:09.092181       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0103 20:18:23.432849       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0103 20:18:39.344286       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0103 20:18:55.435331       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0103 20:19:09.595907       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0103 20:19:27.437712       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0103 20:19:39.847689       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0103 20:19:59.439780       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0103 20:20:10.100071       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0103 20:20:31.441783       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0103 20:20:40.351973       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0103 20:21:03.443877       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0103 20:21:10.603749       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0103 20:21:35.446255       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0103 20:21:40.855839       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0103 20:22:07.448363       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0103 20:22:11.107971       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0103 20:22:39.450822       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0103 20:22:41.359862       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0103 20:23:11.453829       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0103 20:23:11.611991       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	
	
	==> kube-proxy [7a196e4fc88e5e12ebea815c63f5444bdf901c0f88e5e48f515af4a095def802] <==
	W0103 20:04:05.316998       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0103 20:04:05.331408       1 node.go:135] Successfully retrieved node IP: 192.168.72.12
	I0103 20:04:05.331476       1 server_others.go:149] Using iptables Proxier.
	I0103 20:04:05.331887       1 server.go:529] Version: v1.16.0
	I0103 20:04:05.339499       1 config.go:313] Starting service config controller
	I0103 20:04:05.339547       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0103 20:04:05.340541       1 config.go:131] Starting endpoints config controller
	I0103 20:04:05.340587       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0103 20:04:05.441275       1 shared_informer.go:204] Caches are synced for endpoints config 
	I0103 20:04:05.441335       1 shared_informer.go:204] Caches are synced for service config 
	E0103 20:05:21.361533       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: Get https://control-plane.minikube.internal:8443/api/v1/services?allowWatchBookmarks=true&labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=498&timeout=7m2s&timeoutSeconds=422&watch=true: dial tcp 192.168.72.12:8443: connect: connection refused
	E0103 20:05:21.362455       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Endpoints: Get https://control-plane.minikube.internal:8443/api/v1/endpoints?allowWatchBookmarks=true&labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=499&timeout=5m10s&timeoutSeconds=310&watch=true: dial tcp 192.168.72.12:8443: connect: connection refused
	W0103 20:14:21.919840       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0103 20:14:21.930134       1 node.go:135] Successfully retrieved node IP: 192.168.72.12
	I0103 20:14:21.930187       1 server_others.go:149] Using iptables Proxier.
	I0103 20:14:21.930793       1 server.go:529] Version: v1.16.0
	I0103 20:14:21.935276       1 config.go:313] Starting service config controller
	I0103 20:14:21.937684       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0103 20:14:21.935293       1 config.go:131] Starting endpoints config controller
	I0103 20:14:21.938145       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0103 20:14:22.040210       1 shared_informer.go:204] Caches are synced for service config 
	I0103 20:14:22.040416       1 shared_informer.go:204] Caches are synced for endpoints config 
	
	
	==> kube-scheduler [f8babac0762b1da3e7fc5037f5d7cf07ab1bf456ae68951526a6123c7249f18c] <==
	E0103 20:03:43.414176       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0103 20:03:43.414504       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0103 20:03:43.415209       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0103 20:05:21.302955       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: Get https://control-plane.minikube.internal:8443/apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=481&timeout=7m26s&timeoutSeconds=446&watch=true: dial tcp 192.168.72.12:8443: connect: connection refused
	E0103 20:05:21.304245       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=351&timeout=5m22s&timeoutSeconds=322&watch=true: dial tcp 192.168.72.12:8443: connect: connection refused
	E0103 20:05:21.304346       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSINode: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1beta1/csinodes?allowWatchBookmarks=true&resourceVersion=1&timeout=7m11s&timeoutSeconds=431&watch=true: dial tcp 192.168.72.12:8443: connect: connection refused
	E0103 20:05:21.304411       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=1&timeout=6m55s&timeoutSeconds=415&watch=true: dial tcp 192.168.72.12:8443: connect: connection refused
	E0103 20:05:21.304470       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: Get https://control-plane.minikube.internal:8443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=440&timeout=6m40s&timeoutSeconds=400&watch=true: dial tcp 192.168.72.12:8443: connect: connection refused
	E0103 20:05:21.304531       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: Get https://control-plane.minikube.internal:8443/apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=1&timeout=9m36s&timeoutSeconds=576&watch=true: dial tcp 192.168.72.12:8443: connect: connection refused
	E0103 20:05:21.304581       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: Get https://control-plane.minikube.internal:8443/api/v1/services?allowWatchBookmarks=true&resourceVersion=498&timeout=6m55s&timeoutSeconds=415&watch=true: dial tcp 192.168.72.12:8443: connect: connection refused
	E0103 20:05:21.304627       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: Get https://control-plane.minikube.internal:8443/api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=1&timeout=8m51s&timeoutSeconds=531&watch=true: dial tcp 192.168.72.12:8443: connect: connection refused
	E0103 20:05:21.304696       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: Get https://control-plane.minikube.internal:8443/apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=1&timeout=7m18s&timeoutSeconds=438&watch=true: dial tcp 192.168.72.12:8443: connect: connection refused
	E0103 20:05:21.304773       1 reflector.go:280] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to watch *v1.Pod: Get https://control-plane.minikube.internal:8443/api/v1/pods?allowWatchBookmarks=true&fieldSelector=status.phase%3DFailed%!C(MISSING)status.phase%3DSucceeded&resourceVersion=473&timeoutSeconds=443&watch=true: dial tcp 192.168.72.12:8443: connect: connection refused
	E0103 20:05:21.309721       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=1&timeout=7m41s&timeoutSeconds=461&watch=true: dial tcp 192.168.72.12:8443: connect: connection refused
	I0103 20:14:14.750765       1 serving.go:319] Generated self-signed cert in-memory
	W0103 20:14:19.571751       1 authentication.go:262] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0103 20:14:19.573092       1 authentication.go:199] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0103 20:14:19.573153       1 authentication.go:200] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0103 20:14:19.573182       1 authentication.go:201] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0103 20:14:19.585161       1 server.go:143] Version: v1.16.0
	I0103 20:14:19.585383       1 defaults.go:91] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
	W0103 20:14:19.596189       1 authorization.go:47] Authorization is disabled
	W0103 20:14:19.596264       1 authentication.go:79] Authentication is disabled
	I0103 20:14:19.596288       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I0103 20:14:19.597084       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	
	
	==> kubelet <==
	-- Journal begins at Wed 2024-01-03 20:13:42 UTC, ends at Wed 2024-01-03 20:23:34 UTC. --
	Jan 03 20:19:10 old-k8s-version-927922 kubelet[1038]: E0103 20:19:10.480723    1038 pod_workers.go:191] Error syncing pod 3fd1f766-d011-4591-a332-6d9b50832444 ("metrics-server-74d5856cc6-kqzhm_kube-system(3fd1f766-d011-4591-a332-6d9b50832444)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 03 20:19:12 old-k8s-version-927922 kubelet[1038]: E0103 20:19:12.552210    1038 container_manager_linux.go:510] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service
	Jan 03 20:19:23 old-k8s-version-927922 kubelet[1038]: E0103 20:19:23.478570    1038 pod_workers.go:191] Error syncing pod 3fd1f766-d011-4591-a332-6d9b50832444 ("metrics-server-74d5856cc6-kqzhm_kube-system(3fd1f766-d011-4591-a332-6d9b50832444)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 03 20:19:35 old-k8s-version-927922 kubelet[1038]: E0103 20:19:35.478799    1038 pod_workers.go:191] Error syncing pod 3fd1f766-d011-4591-a332-6d9b50832444 ("metrics-server-74d5856cc6-kqzhm_kube-system(3fd1f766-d011-4591-a332-6d9b50832444)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 03 20:19:50 old-k8s-version-927922 kubelet[1038]: E0103 20:19:50.478930    1038 pod_workers.go:191] Error syncing pod 3fd1f766-d011-4591-a332-6d9b50832444 ("metrics-server-74d5856cc6-kqzhm_kube-system(3fd1f766-d011-4591-a332-6d9b50832444)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 03 20:20:02 old-k8s-version-927922 kubelet[1038]: E0103 20:20:02.479012    1038 pod_workers.go:191] Error syncing pod 3fd1f766-d011-4591-a332-6d9b50832444 ("metrics-server-74d5856cc6-kqzhm_kube-system(3fd1f766-d011-4591-a332-6d9b50832444)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 03 20:20:13 old-k8s-version-927922 kubelet[1038]: E0103 20:20:13.490790    1038 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 03 20:20:13 old-k8s-version-927922 kubelet[1038]: E0103 20:20:13.490920    1038 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 03 20:20:13 old-k8s-version-927922 kubelet[1038]: E0103 20:20:13.491001    1038 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 03 20:20:13 old-k8s-version-927922 kubelet[1038]: E0103 20:20:13.491042    1038 pod_workers.go:191] Error syncing pod 3fd1f766-d011-4591-a332-6d9b50832444 ("metrics-server-74d5856cc6-kqzhm_kube-system(3fd1f766-d011-4591-a332-6d9b50832444)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Jan 03 20:20:24 old-k8s-version-927922 kubelet[1038]: E0103 20:20:24.478967    1038 pod_workers.go:191] Error syncing pod 3fd1f766-d011-4591-a332-6d9b50832444 ("metrics-server-74d5856cc6-kqzhm_kube-system(3fd1f766-d011-4591-a332-6d9b50832444)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 03 20:20:35 old-k8s-version-927922 kubelet[1038]: E0103 20:20:35.479003    1038 pod_workers.go:191] Error syncing pod 3fd1f766-d011-4591-a332-6d9b50832444 ("metrics-server-74d5856cc6-kqzhm_kube-system(3fd1f766-d011-4591-a332-6d9b50832444)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 03 20:20:49 old-k8s-version-927922 kubelet[1038]: E0103 20:20:49.479891    1038 pod_workers.go:191] Error syncing pod 3fd1f766-d011-4591-a332-6d9b50832444 ("metrics-server-74d5856cc6-kqzhm_kube-system(3fd1f766-d011-4591-a332-6d9b50832444)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 03 20:21:03 old-k8s-version-927922 kubelet[1038]: E0103 20:21:03.478606    1038 pod_workers.go:191] Error syncing pod 3fd1f766-d011-4591-a332-6d9b50832444 ("metrics-server-74d5856cc6-kqzhm_kube-system(3fd1f766-d011-4591-a332-6d9b50832444)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 03 20:21:15 old-k8s-version-927922 kubelet[1038]: E0103 20:21:15.478995    1038 pod_workers.go:191] Error syncing pod 3fd1f766-d011-4591-a332-6d9b50832444 ("metrics-server-74d5856cc6-kqzhm_kube-system(3fd1f766-d011-4591-a332-6d9b50832444)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 03 20:21:29 old-k8s-version-927922 kubelet[1038]: E0103 20:21:29.478782    1038 pod_workers.go:191] Error syncing pod 3fd1f766-d011-4591-a332-6d9b50832444 ("metrics-server-74d5856cc6-kqzhm_kube-system(3fd1f766-d011-4591-a332-6d9b50832444)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 03 20:21:41 old-k8s-version-927922 kubelet[1038]: E0103 20:21:41.479380    1038 pod_workers.go:191] Error syncing pod 3fd1f766-d011-4591-a332-6d9b50832444 ("metrics-server-74d5856cc6-kqzhm_kube-system(3fd1f766-d011-4591-a332-6d9b50832444)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 03 20:21:55 old-k8s-version-927922 kubelet[1038]: E0103 20:21:55.479278    1038 pod_workers.go:191] Error syncing pod 3fd1f766-d011-4591-a332-6d9b50832444 ("metrics-server-74d5856cc6-kqzhm_kube-system(3fd1f766-d011-4591-a332-6d9b50832444)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 03 20:22:10 old-k8s-version-927922 kubelet[1038]: E0103 20:22:10.479796    1038 pod_workers.go:191] Error syncing pod 3fd1f766-d011-4591-a332-6d9b50832444 ("metrics-server-74d5856cc6-kqzhm_kube-system(3fd1f766-d011-4591-a332-6d9b50832444)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 03 20:22:22 old-k8s-version-927922 kubelet[1038]: E0103 20:22:22.479159    1038 pod_workers.go:191] Error syncing pod 3fd1f766-d011-4591-a332-6d9b50832444 ("metrics-server-74d5856cc6-kqzhm_kube-system(3fd1f766-d011-4591-a332-6d9b50832444)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 03 20:22:34 old-k8s-version-927922 kubelet[1038]: E0103 20:22:34.480130    1038 pod_workers.go:191] Error syncing pod 3fd1f766-d011-4591-a332-6d9b50832444 ("metrics-server-74d5856cc6-kqzhm_kube-system(3fd1f766-d011-4591-a332-6d9b50832444)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 03 20:22:45 old-k8s-version-927922 kubelet[1038]: E0103 20:22:45.478971    1038 pod_workers.go:191] Error syncing pod 3fd1f766-d011-4591-a332-6d9b50832444 ("metrics-server-74d5856cc6-kqzhm_kube-system(3fd1f766-d011-4591-a332-6d9b50832444)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 03 20:22:58 old-k8s-version-927922 kubelet[1038]: E0103 20:22:58.480330    1038 pod_workers.go:191] Error syncing pod 3fd1f766-d011-4591-a332-6d9b50832444 ("metrics-server-74d5856cc6-kqzhm_kube-system(3fd1f766-d011-4591-a332-6d9b50832444)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 03 20:23:11 old-k8s-version-927922 kubelet[1038]: E0103 20:23:11.478663    1038 pod_workers.go:191] Error syncing pod 3fd1f766-d011-4591-a332-6d9b50832444 ("metrics-server-74d5856cc6-kqzhm_kube-system(3fd1f766-d011-4591-a332-6d9b50832444)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 03 20:23:22 old-k8s-version-927922 kubelet[1038]: E0103 20:23:22.479680    1038 pod_workers.go:191] Error syncing pod 3fd1f766-d011-4591-a332-6d9b50832444 ("metrics-server-74d5856cc6-kqzhm_kube-system(3fd1f766-d011-4591-a332-6d9b50832444)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	
	==> storage-provisioner [b7169f167164d608b443918e6d53248d93a1f5d91d15c4db2f35a6bc93ee1be3] <==
	I0103 20:04:05.743846       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0103 20:04:05.765226       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0103 20:04:05.765379       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0103 20:04:05.780549       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0103 20:04:05.781639       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-927922_40b0ea06-1db7-4d9c-9667-99fc64ff8309!
	I0103 20:04:05.784315       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b3289e9e-95c1-435d-9042-5a2215b61059", APIVersion:"v1", ResourceVersion:"389", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-927922_40b0ea06-1db7-4d9c-9667-99fc64ff8309 became leader
	I0103 20:04:05.882551       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-927922_40b0ea06-1db7-4d9c-9667-99fc64ff8309!
	I0103 20:14:22.467660       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0103 20:14:22.481894       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0103 20:14:22.481982       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0103 20:14:39.886080       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0103 20:14:39.886293       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-927922_71e43d0a-bca4-4c20-9c43-10ee3df29725!
	I0103 20:14:39.891098       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b3289e9e-95c1-435d-9042-5a2215b61059", APIVersion:"v1", ResourceVersion:"612", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-927922_71e43d0a-bca4-4c20-9c43-10ee3df29725 became leader
	I0103 20:14:39.987634       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-927922_71e43d0a-bca4-4c20-9c43-10ee3df29725!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-927922 -n old-k8s-version-927922
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-927922 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-kqzhm
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-927922 describe pod metrics-server-74d5856cc6-kqzhm
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-927922 describe pod metrics-server-74d5856cc6-kqzhm: exit status 1 (83.737832ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-kqzhm" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-927922 describe pod metrics-server-74d5856cc6-kqzhm: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.19s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (543.17s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-451331 -n embed-certs-451331
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-01-03 20:26:38.792120436 +0000 UTC m=+5364.264697449
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-451331 -n embed-certs-451331
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-451331 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-451331 logs -n 25: (1.621626085s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-719541 sudo cat                              | bridge-719541                | jenkins | v1.32.0 | 03 Jan 24 20:04 UTC | 03 Jan 24 20:04 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-719541 sudo                                  | bridge-719541                | jenkins | v1.32.0 | 03 Jan 24 20:04 UTC | 03 Jan 24 20:04 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-719541 sudo                                  | bridge-719541                | jenkins | v1.32.0 | 03 Jan 24 20:04 UTC | 03 Jan 24 20:04 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-719541 sudo                                  | bridge-719541                | jenkins | v1.32.0 | 03 Jan 24 20:04 UTC | 03 Jan 24 20:04 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-719541 sudo find                             | bridge-719541                | jenkins | v1.32.0 | 03 Jan 24 20:04 UTC | 03 Jan 24 20:04 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-719541 sudo crio                             | bridge-719541                | jenkins | v1.32.0 | 03 Jan 24 20:04 UTC | 03 Jan 24 20:04 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-719541                                       | bridge-719541                | jenkins | v1.32.0 | 03 Jan 24 20:04 UTC | 03 Jan 24 20:04 UTC |
	| delete  | -p                                                     | disable-driver-mounts-350596 | jenkins | v1.32.0 | 03 Jan 24 20:04 UTC | 03 Jan 24 20:04 UTC |
	|         | disable-driver-mounts-350596                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-018788 | jenkins | v1.32.0 | 03 Jan 24 20:04 UTC | 03 Jan 24 20:06 UTC |
	|         | default-k8s-diff-port-018788                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-927922        | old-k8s-version-927922       | jenkins | v1.32.0 | 03 Jan 24 20:05 UTC | 03 Jan 24 20:05 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-927922                              | old-k8s-version-927922       | jenkins | v1.32.0 | 03 Jan 24 20:05 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-451331            | embed-certs-451331           | jenkins | v1.32.0 | 03 Jan 24 20:05 UTC | 03 Jan 24 20:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-451331                                  | embed-certs-451331           | jenkins | v1.32.0 | 03 Jan 24 20:06 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-749210             | no-preload-749210            | jenkins | v1.32.0 | 03 Jan 24 20:06 UTC | 03 Jan 24 20:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-749210                                   | no-preload-749210            | jenkins | v1.32.0 | 03 Jan 24 20:06 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-018788  | default-k8s-diff-port-018788 | jenkins | v1.32.0 | 03 Jan 24 20:06 UTC | 03 Jan 24 20:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-018788 | jenkins | v1.32.0 | 03 Jan 24 20:06 UTC |                     |
	|         | default-k8s-diff-port-018788                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-927922             | old-k8s-version-927922       | jenkins | v1.32.0 | 03 Jan 24 20:07 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-927922                              | old-k8s-version-927922       | jenkins | v1.32.0 | 03 Jan 24 20:07 UTC | 03 Jan 24 20:14 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-451331                 | embed-certs-451331           | jenkins | v1.32.0 | 03 Jan 24 20:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-451331                                  | embed-certs-451331           | jenkins | v1.32.0 | 03 Jan 24 20:08 UTC | 03 Jan 24 20:17 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-749210                  | no-preload-749210            | jenkins | v1.32.0 | 03 Jan 24 20:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-018788       | default-k8s-diff-port-018788 | jenkins | v1.32.0 | 03 Jan 24 20:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-749210                                   | no-preload-749210            | jenkins | v1.32.0 | 03 Jan 24 20:09 UTC | 03 Jan 24 20:18 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-018788 | jenkins | v1.32.0 | 03 Jan 24 20:09 UTC | 03 Jan 24 20:18 UTC |
	|         | default-k8s-diff-port-018788                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/03 20:09:05
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0103 20:09:05.502375   62050 out.go:296] Setting OutFile to fd 1 ...
	I0103 20:09:05.502548   62050 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 20:09:05.502558   62050 out.go:309] Setting ErrFile to fd 2...
	I0103 20:09:05.502566   62050 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 20:09:05.502759   62050 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17885-9609/.minikube/bin
	I0103 20:09:05.503330   62050 out.go:303] Setting JSON to false
	I0103 20:09:05.504222   62050 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6693,"bootTime":1704305853,"procs":223,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0103 20:09:05.504283   62050 start.go:138] virtualization: kvm guest
	I0103 20:09:05.507002   62050 out.go:177] * [default-k8s-diff-port-018788] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0103 20:09:05.508642   62050 out.go:177]   - MINIKUBE_LOCATION=17885
	I0103 20:09:05.508667   62050 notify.go:220] Checking for updates...
	I0103 20:09:05.510296   62050 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0103 20:09:05.511927   62050 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17885-9609/kubeconfig
	I0103 20:09:05.513487   62050 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17885-9609/.minikube
	I0103 20:09:05.515064   62050 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0103 20:09:05.516515   62050 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0103 20:09:05.518301   62050 config.go:182] Loaded profile config "default-k8s-diff-port-018788": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 20:09:05.518774   62050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:09:05.518827   62050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:09:05.533730   62050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37361
	I0103 20:09:05.534098   62050 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:09:05.534667   62050 main.go:141] libmachine: Using API Version  1
	I0103 20:09:05.534699   62050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:09:05.535027   62050 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:09:05.535298   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .DriverName
	I0103 20:09:05.535543   62050 driver.go:392] Setting default libvirt URI to qemu:///system
	I0103 20:09:05.535823   62050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:09:05.535855   62050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:09:05.549808   62050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33389
	I0103 20:09:05.550147   62050 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:09:05.550708   62050 main.go:141] libmachine: Using API Version  1
	I0103 20:09:05.550733   62050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:09:05.551041   62050 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:09:05.551258   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .DriverName
	I0103 20:09:05.583981   62050 out.go:177] * Using the kvm2 driver based on existing profile
	I0103 20:09:05.585560   62050 start.go:298] selected driver: kvm2
	I0103 20:09:05.585580   62050 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-018788 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-018788 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.139 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 20:09:05.585707   62050 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0103 20:09:05.586411   62050 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:09:05.586494   62050 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17885-9609/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0103 20:09:05.601346   62050 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0103 20:09:05.601747   62050 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0103 20:09:05.601812   62050 cni.go:84] Creating CNI manager for ""
	I0103 20:09:05.601828   62050 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0103 20:09:05.601839   62050 start_flags.go:323] config:
	{Name:default-k8s-diff-port-018788 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-018788 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.139 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 20:09:05.602011   62050 iso.go:125] acquiring lock: {Name:mk59d09085a9554144b68de9b7bfe0e0fce53cc5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:09:05.604007   62050 out.go:177] * Starting control plane node default-k8s-diff-port-018788 in cluster default-k8s-diff-port-018788
	I0103 20:09:03.174819   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:09:06.246788   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:09:04.840696   62015 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0103 20:09:04.840826   62015 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/no-preload-749210/config.json ...
	I0103 20:09:04.840950   62015 cache.go:107] acquiring lock: {Name:mk76774936d94ce826f83ee0faaaf3557831e6bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:09:04.840994   62015 cache.go:107] acquiring lock: {Name:mk25b47a2b083e99837dbc206b0832b20d7da669 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:09:04.841017   62015 cache.go:107] acquiring lock: {Name:mk0a26120b5274bc796f1ae286da54dda262a5a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:09:04.841059   62015 cache.go:115] /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 exists
	I0103 20:09:04.841064   62015 start.go:365] acquiring machines lock for no-preload-749210: {Name:mk43df5d7e9fef8aa5f3e5c539ca15bff35ae8cf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0103 20:09:04.841070   62015 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2" took 128.344µs
	I0103 20:09:04.841078   62015 cache.go:115] /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 exists
	I0103 20:09:04.841081   62015 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 succeeded
	I0103 20:09:04.841085   62015 cache.go:115] /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 exists
	I0103 20:09:04.840951   62015 cache.go:107] acquiring lock: {Name:mk372d2259ddc4c784d2a14a7416ba9b749d6f9a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:09:04.841089   62015 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9" took 97.811µs
	I0103 20:09:04.841093   62015 cache.go:96] cache image "registry.k8s.io/etcd:3.5.10-0" -> "/home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0" took 87.964µs
	I0103 20:09:04.841108   62015 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 succeeded
	I0103 20:09:04.841109   62015 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.10-0 -> /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 succeeded
	I0103 20:09:04.841115   62015 cache.go:115] /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0103 20:09:04.841052   62015 cache.go:107] acquiring lock: {Name:mk04d21d7cdef9332755ef804a44022ba9c4a8c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:09:04.841129   62015 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 185.143µs
	I0103 20:09:04.841155   62015 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0103 20:09:04.841139   62015 cache.go:107] acquiring lock: {Name:mk5c34e1c9b00efde01e776962411ad1105596ae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:09:04.841183   62015 cache.go:115] /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0103 20:09:04.841203   62015 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1" took 176.832µs
	I0103 20:09:04.841212   62015 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0103 20:09:04.841400   62015 cache.go:107] acquiring lock: {Name:mk0ae9e390d74a93289bc4e45b5511dce57beeb9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:09:04.841216   62015 cache.go:107] acquiring lock: {Name:mkccb08ee6224be0e6786052f4bebc8d21ec8a42 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:09:04.841614   62015 cache.go:115] /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 exists
	I0103 20:09:04.841633   62015 cache.go:115] /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 exists
	I0103 20:09:04.841675   62015 cache.go:115] /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 exists
	I0103 20:09:04.841679   62015 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2" took 497.325µs
	I0103 20:09:04.841672   62015 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2" took 557.891µs
	I0103 20:09:04.841716   62015 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 succeeded
	I0103 20:09:04.841696   62015 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2" took 499.205µs
	I0103 20:09:04.841745   62015 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 succeeded
	I0103 20:09:04.841706   62015 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 succeeded
	I0103 20:09:04.841755   62015 cache.go:87] Successfully saved all images to host disk.
	I0103 20:09:05.605517   62050 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0103 20:09:05.605574   62050 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0103 20:09:05.605590   62050 cache.go:56] Caching tarball of preloaded images
	I0103 20:09:05.605669   62050 preload.go:174] Found /home/jenkins/minikube-integration/17885-9609/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0103 20:09:05.605681   62050 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0103 20:09:05.605787   62050 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/default-k8s-diff-port-018788/config.json ...
	I0103 20:09:05.605973   62050 start.go:365] acquiring machines lock for default-k8s-diff-port-018788: {Name:mk43df5d7e9fef8aa5f3e5c539ca15bff35ae8cf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0103 20:09:12.326805   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:09:15.398807   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:09:21.478760   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:09:24.550821   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:09:30.630841   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:09:33.702766   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:09:39.782732   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:09:42.854926   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:09:48.934815   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:09:52.006845   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:09:58.086804   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:10:01.158903   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:10:07.238808   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:10:10.310897   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:10:16.390869   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:10:19.462833   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:10:25.542866   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:10:28.614753   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:10:34.694867   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:10:37.766876   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:10:43.846838   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:10:46.918843   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:10:52.998853   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:10:56.070822   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:11:02.150825   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:11:05.222884   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:11:11.302787   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:11:14.374818   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:11:20.454810   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:11:23.526899   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:11:29.606842   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:11:32.678789   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:11:38.758787   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:11:41.830855   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:11:47.910801   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:11:50.982868   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:11:57.062889   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:12:00.134834   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:12:06.214856   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:12:09.286845   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:12:15.366787   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:12:18.438756   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:12:24.518814   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:12:27.590887   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:12:30.594981   61676 start.go:369] acquired machines lock for "embed-certs-451331" in 3m56.986277612s
	I0103 20:12:30.595030   61676 start.go:96] Skipping create...Using existing machine configuration
	I0103 20:12:30.595039   61676 fix.go:54] fixHost starting: 
	I0103 20:12:30.595434   61676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:12:30.595466   61676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:12:30.609917   61676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43047
	I0103 20:12:30.610302   61676 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:12:30.610819   61676 main.go:141] libmachine: Using API Version  1
	I0103 20:12:30.610845   61676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:12:30.611166   61676 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:12:30.611348   61676 main.go:141] libmachine: (embed-certs-451331) Calling .DriverName
	I0103 20:12:30.611486   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetState
	I0103 20:12:30.613108   61676 fix.go:102] recreateIfNeeded on embed-certs-451331: state=Stopped err=<nil>
	I0103 20:12:30.613128   61676 main.go:141] libmachine: (embed-certs-451331) Calling .DriverName
	W0103 20:12:30.613291   61676 fix.go:128] unexpected machine state, will restart: <nil>
	I0103 20:12:30.615194   61676 out.go:177] * Restarting existing kvm2 VM for "embed-certs-451331" ...
	I0103 20:12:30.592855   61400 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0103 20:12:30.592889   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHHostname
	I0103 20:12:30.594843   61400 machine.go:91] provisioned docker machine in 4m37.406324683s
	I0103 20:12:30.594886   61400 fix.go:56] fixHost completed within 4m37.42774841s
	I0103 20:12:30.594892   61400 start.go:83] releasing machines lock for "old-k8s-version-927922", held for 4m37.427764519s
	W0103 20:12:30.594913   61400 start.go:694] error starting host: provision: host is not running
	W0103 20:12:30.595005   61400 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0103 20:12:30.595014   61400 start.go:709] Will try again in 5 seconds ...
	I0103 20:12:30.616365   61676 main.go:141] libmachine: (embed-certs-451331) Calling .Start
	I0103 20:12:30.616513   61676 main.go:141] libmachine: (embed-certs-451331) Ensuring networks are active...
	I0103 20:12:30.617380   61676 main.go:141] libmachine: (embed-certs-451331) Ensuring network default is active
	I0103 20:12:30.617718   61676 main.go:141] libmachine: (embed-certs-451331) Ensuring network mk-embed-certs-451331 is active
	I0103 20:12:30.618103   61676 main.go:141] libmachine: (embed-certs-451331) Getting domain xml...
	I0103 20:12:30.618735   61676 main.go:141] libmachine: (embed-certs-451331) Creating domain...
	I0103 20:12:31.839751   61676 main.go:141] libmachine: (embed-certs-451331) Waiting to get IP...
	I0103 20:12:31.840608   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:31.841035   61676 main.go:141] libmachine: (embed-certs-451331) DBG | unable to find current IP address of domain embed-certs-451331 in network mk-embed-certs-451331
	I0103 20:12:31.841117   61676 main.go:141] libmachine: (embed-certs-451331) DBG | I0103 20:12:31.841008   62575 retry.go:31] will retry after 303.323061ms: waiting for machine to come up
	I0103 20:12:32.146508   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:32.147005   61676 main.go:141] libmachine: (embed-certs-451331) DBG | unable to find current IP address of domain embed-certs-451331 in network mk-embed-certs-451331
	I0103 20:12:32.147037   61676 main.go:141] libmachine: (embed-certs-451331) DBG | I0103 20:12:32.146950   62575 retry.go:31] will retry after 240.92709ms: waiting for machine to come up
	I0103 20:12:32.389487   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:32.389931   61676 main.go:141] libmachine: (embed-certs-451331) DBG | unable to find current IP address of domain embed-certs-451331 in network mk-embed-certs-451331
	I0103 20:12:32.389962   61676 main.go:141] libmachine: (embed-certs-451331) DBG | I0103 20:12:32.389887   62575 retry.go:31] will retry after 473.263026ms: waiting for machine to come up
	I0103 20:12:32.864624   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:32.865060   61676 main.go:141] libmachine: (embed-certs-451331) DBG | unable to find current IP address of domain embed-certs-451331 in network mk-embed-certs-451331
	I0103 20:12:32.865082   61676 main.go:141] libmachine: (embed-certs-451331) DBG | I0103 20:12:32.865006   62575 retry.go:31] will retry after 473.373684ms: waiting for machine to come up
	I0103 20:12:33.339691   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:33.340156   61676 main.go:141] libmachine: (embed-certs-451331) DBG | unable to find current IP address of domain embed-certs-451331 in network mk-embed-certs-451331
	I0103 20:12:33.340189   61676 main.go:141] libmachine: (embed-certs-451331) DBG | I0103 20:12:33.340098   62575 retry.go:31] will retry after 639.850669ms: waiting for machine to come up
	I0103 20:12:35.596669   61400 start.go:365] acquiring machines lock for old-k8s-version-927922: {Name:mk43df5d7e9fef8aa5f3e5c539ca15bff35ae8cf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0103 20:12:33.982104   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:33.982622   61676 main.go:141] libmachine: (embed-certs-451331) DBG | unable to find current IP address of domain embed-certs-451331 in network mk-embed-certs-451331
	I0103 20:12:33.982655   61676 main.go:141] libmachine: (embed-certs-451331) DBG | I0103 20:12:33.982583   62575 retry.go:31] will retry after 589.282725ms: waiting for machine to come up
	I0103 20:12:34.573280   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:34.573692   61676 main.go:141] libmachine: (embed-certs-451331) DBG | unable to find current IP address of domain embed-certs-451331 in network mk-embed-certs-451331
	I0103 20:12:34.573716   61676 main.go:141] libmachine: (embed-certs-451331) DBG | I0103 20:12:34.573639   62575 retry.go:31] will retry after 884.387817ms: waiting for machine to come up
	I0103 20:12:35.459819   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:35.460233   61676 main.go:141] libmachine: (embed-certs-451331) DBG | unable to find current IP address of domain embed-certs-451331 in network mk-embed-certs-451331
	I0103 20:12:35.460287   61676 main.go:141] libmachine: (embed-certs-451331) DBG | I0103 20:12:35.460168   62575 retry.go:31] will retry after 1.326571684s: waiting for machine to come up
	I0103 20:12:36.788923   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:36.789429   61676 main.go:141] libmachine: (embed-certs-451331) DBG | unable to find current IP address of domain embed-certs-451331 in network mk-embed-certs-451331
	I0103 20:12:36.789452   61676 main.go:141] libmachine: (embed-certs-451331) DBG | I0103 20:12:36.789395   62575 retry.go:31] will retry after 1.436230248s: waiting for machine to come up
	I0103 20:12:38.227994   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:38.228374   61676 main.go:141] libmachine: (embed-certs-451331) DBG | unable to find current IP address of domain embed-certs-451331 in network mk-embed-certs-451331
	I0103 20:12:38.228397   61676 main.go:141] libmachine: (embed-certs-451331) DBG | I0103 20:12:38.228336   62575 retry.go:31] will retry after 2.127693351s: waiting for machine to come up
	I0103 20:12:40.358485   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:40.358968   61676 main.go:141] libmachine: (embed-certs-451331) DBG | unable to find current IP address of domain embed-certs-451331 in network mk-embed-certs-451331
	I0103 20:12:40.358998   61676 main.go:141] libmachine: (embed-certs-451331) DBG | I0103 20:12:40.358912   62575 retry.go:31] will retry after 1.816116886s: waiting for machine to come up
	I0103 20:12:42.177782   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:42.178359   61676 main.go:141] libmachine: (embed-certs-451331) DBG | unable to find current IP address of domain embed-certs-451331 in network mk-embed-certs-451331
	I0103 20:12:42.178390   61676 main.go:141] libmachine: (embed-certs-451331) DBG | I0103 20:12:42.178296   62575 retry.go:31] will retry after 3.199797073s: waiting for machine to come up
	I0103 20:12:45.381712   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:45.382053   61676 main.go:141] libmachine: (embed-certs-451331) DBG | unable to find current IP address of domain embed-certs-451331 in network mk-embed-certs-451331
	I0103 20:12:45.382075   61676 main.go:141] libmachine: (embed-certs-451331) DBG | I0103 20:12:45.381991   62575 retry.go:31] will retry after 3.573315393s: waiting for machine to come up
	I0103 20:12:50.159164   62015 start.go:369] acquired machines lock for "no-preload-749210" in 3m45.318070652s
	I0103 20:12:50.159226   62015 start.go:96] Skipping create...Using existing machine configuration
	I0103 20:12:50.159235   62015 fix.go:54] fixHost starting: 
	I0103 20:12:50.159649   62015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:12:50.159688   62015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:12:50.176573   62015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34959
	I0103 20:12:50.176998   62015 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:12:50.177504   62015 main.go:141] libmachine: Using API Version  1
	I0103 20:12:50.177529   62015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:12:50.177925   62015 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:12:50.178125   62015 main.go:141] libmachine: (no-preload-749210) Calling .DriverName
	I0103 20:12:50.178297   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetState
	I0103 20:12:50.179850   62015 fix.go:102] recreateIfNeeded on no-preload-749210: state=Stopped err=<nil>
	I0103 20:12:50.179873   62015 main.go:141] libmachine: (no-preload-749210) Calling .DriverName
	W0103 20:12:50.180066   62015 fix.go:128] unexpected machine state, will restart: <nil>
	I0103 20:12:50.182450   62015 out.go:177] * Restarting existing kvm2 VM for "no-preload-749210" ...
	I0103 20:12:48.959159   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:48.959637   61676 main.go:141] libmachine: (embed-certs-451331) Found IP for machine: 192.168.50.197
	I0103 20:12:48.959655   61676 main.go:141] libmachine: (embed-certs-451331) Reserving static IP address...
	I0103 20:12:48.959666   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has current primary IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:48.960051   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "embed-certs-451331", mac: "52:54:00:38:4a:19", ip: "192.168.50.197"} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:12:48.960073   61676 main.go:141] libmachine: (embed-certs-451331) DBG | skip adding static IP to network mk-embed-certs-451331 - found existing host DHCP lease matching {name: "embed-certs-451331", mac: "52:54:00:38:4a:19", ip: "192.168.50.197"}
	I0103 20:12:48.960086   61676 main.go:141] libmachine: (embed-certs-451331) Reserved static IP address: 192.168.50.197
	I0103 20:12:48.960101   61676 main.go:141] libmachine: (embed-certs-451331) Waiting for SSH to be available...
	I0103 20:12:48.960117   61676 main.go:141] libmachine: (embed-certs-451331) DBG | Getting to WaitForSSH function...
	I0103 20:12:48.962160   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:48.962443   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:4a:19", ip: ""} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:12:48.962478   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:48.962611   61676 main.go:141] libmachine: (embed-certs-451331) DBG | Using SSH client type: external
	I0103 20:12:48.962631   61676 main.go:141] libmachine: (embed-certs-451331) DBG | Using SSH private key: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/embed-certs-451331/id_rsa (-rw-------)
	I0103 20:12:48.962661   61676 main.go:141] libmachine: (embed-certs-451331) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.197 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17885-9609/.minikube/machines/embed-certs-451331/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0103 20:12:48.962681   61676 main.go:141] libmachine: (embed-certs-451331) DBG | About to run SSH command:
	I0103 20:12:48.962718   61676 main.go:141] libmachine: (embed-certs-451331) DBG | exit 0
	I0103 20:12:49.058790   61676 main.go:141] libmachine: (embed-certs-451331) DBG | SSH cmd err, output: <nil>: 
	I0103 20:12:49.059176   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetConfigRaw
	I0103 20:12:49.059838   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetIP
	I0103 20:12:49.062025   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:49.062407   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:4a:19", ip: ""} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:12:49.062440   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:49.062697   61676 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/embed-certs-451331/config.json ...
	I0103 20:12:49.062878   61676 machine.go:88] provisioning docker machine ...
	I0103 20:12:49.062894   61676 main.go:141] libmachine: (embed-certs-451331) Calling .DriverName
	I0103 20:12:49.063097   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetMachineName
	I0103 20:12:49.063258   61676 buildroot.go:166] provisioning hostname "embed-certs-451331"
	I0103 20:12:49.063278   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetMachineName
	I0103 20:12:49.063423   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHHostname
	I0103 20:12:49.065735   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:49.066121   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:4a:19", ip: ""} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:12:49.066161   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:49.066328   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHPort
	I0103 20:12:49.066507   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHKeyPath
	I0103 20:12:49.066695   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHKeyPath
	I0103 20:12:49.066860   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHUsername
	I0103 20:12:49.067065   61676 main.go:141] libmachine: Using SSH client type: native
	I0103 20:12:49.067455   61676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.50.197 22 <nil> <nil>}
	I0103 20:12:49.067469   61676 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-451331 && echo "embed-certs-451331" | sudo tee /etc/hostname
	I0103 20:12:49.210431   61676 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-451331
	
	I0103 20:12:49.210465   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHHostname
	I0103 20:12:49.213162   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:49.213503   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:4a:19", ip: ""} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:12:49.213573   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:49.213682   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHPort
	I0103 20:12:49.213911   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHKeyPath
	I0103 20:12:49.214094   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHKeyPath
	I0103 20:12:49.214289   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHUsername
	I0103 20:12:49.214449   61676 main.go:141] libmachine: Using SSH client type: native
	I0103 20:12:49.214837   61676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.50.197 22 <nil> <nil>}
	I0103 20:12:49.214856   61676 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-451331' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-451331/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-451331' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0103 20:12:49.350098   61676 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0103 20:12:49.350134   61676 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17885-9609/.minikube CaCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17885-9609/.minikube}
	I0103 20:12:49.350158   61676 buildroot.go:174] setting up certificates
	I0103 20:12:49.350172   61676 provision.go:83] configureAuth start
	I0103 20:12:49.350188   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetMachineName
	I0103 20:12:49.350497   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetIP
	I0103 20:12:49.352947   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:49.353356   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:4a:19", ip: ""} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:12:49.353387   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:49.353448   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHHostname
	I0103 20:12:49.355701   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:49.356005   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:4a:19", ip: ""} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:12:49.356033   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:49.356183   61676 provision.go:138] copyHostCerts
	I0103 20:12:49.356241   61676 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem, removing ...
	I0103 20:12:49.356254   61676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem
	I0103 20:12:49.356322   61676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem (1078 bytes)
	I0103 20:12:49.356413   61676 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem, removing ...
	I0103 20:12:49.356421   61676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem
	I0103 20:12:49.356446   61676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem (1123 bytes)
	I0103 20:12:49.356506   61676 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem, removing ...
	I0103 20:12:49.356513   61676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem
	I0103 20:12:49.356535   61676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem (1679 bytes)
	I0103 20:12:49.356587   61676 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem org=jenkins.embed-certs-451331 san=[192.168.50.197 192.168.50.197 localhost 127.0.0.1 minikube embed-certs-451331]
	I0103 20:12:49.413721   61676 provision.go:172] copyRemoteCerts
	I0103 20:12:49.413781   61676 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0103 20:12:49.413804   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHHostname
	I0103 20:12:49.416658   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:49.417143   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:4a:19", ip: ""} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:12:49.417170   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:49.417420   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHPort
	I0103 20:12:49.417617   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHKeyPath
	I0103 20:12:49.417814   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHUsername
	I0103 20:12:49.417977   61676 sshutil.go:53] new ssh client: &{IP:192.168.50.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/embed-certs-451331/id_rsa Username:docker}
	I0103 20:12:49.510884   61676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0103 20:12:49.533465   61676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0103 20:12:49.554895   61676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0103 20:12:49.576069   61676 provision.go:86] duration metric: configureAuth took 225.882364ms
	I0103 20:12:49.576094   61676 buildroot.go:189] setting minikube options for container-runtime
	I0103 20:12:49.576310   61676 config.go:182] Loaded profile config "embed-certs-451331": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 20:12:49.576387   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHHostname
	I0103 20:12:49.579119   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:49.579413   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:4a:19", ip: ""} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:12:49.579461   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:49.579590   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHPort
	I0103 20:12:49.579780   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHKeyPath
	I0103 20:12:49.579968   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHKeyPath
	I0103 20:12:49.580121   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHUsername
	I0103 20:12:49.580271   61676 main.go:141] libmachine: Using SSH client type: native
	I0103 20:12:49.580591   61676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.50.197 22 <nil> <nil>}
	I0103 20:12:49.580615   61676 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0103 20:12:49.883159   61676 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0103 20:12:49.883188   61676 machine.go:91] provisioned docker machine in 820.299871ms
	I0103 20:12:49.883199   61676 start.go:300] post-start starting for "embed-certs-451331" (driver="kvm2")
	I0103 20:12:49.883212   61676 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0103 20:12:49.883239   61676 main.go:141] libmachine: (embed-certs-451331) Calling .DriverName
	I0103 20:12:49.883565   61676 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0103 20:12:49.883599   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHHostname
	I0103 20:12:49.886365   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:49.886658   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:4a:19", ip: ""} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:12:49.886695   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:49.886878   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHPort
	I0103 20:12:49.887091   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHKeyPath
	I0103 20:12:49.887293   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHUsername
	I0103 20:12:49.887468   61676 sshutil.go:53] new ssh client: &{IP:192.168.50.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/embed-certs-451331/id_rsa Username:docker}
	I0103 20:12:49.985529   61676 ssh_runner.go:195] Run: cat /etc/os-release
	I0103 20:12:49.989732   61676 info.go:137] Remote host: Buildroot 2021.02.12
	I0103 20:12:49.989758   61676 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-9609/.minikube/addons for local assets ...
	I0103 20:12:49.989820   61676 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-9609/.minikube/files for local assets ...
	I0103 20:12:49.989891   61676 filesync.go:149] local asset: /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem -> 167952.pem in /etc/ssl/certs
	I0103 20:12:49.989981   61676 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0103 20:12:49.999882   61676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem --> /etc/ssl/certs/167952.pem (1708 bytes)
	I0103 20:12:50.022936   61676 start.go:303] post-start completed in 139.710189ms
	I0103 20:12:50.022966   61676 fix.go:56] fixHost completed within 19.427926379s
	I0103 20:12:50.023002   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHHostname
	I0103 20:12:50.025667   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:50.025940   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:4a:19", ip: ""} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:12:50.025973   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:50.026212   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHPort
	I0103 20:12:50.026424   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHKeyPath
	I0103 20:12:50.026671   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHKeyPath
	I0103 20:12:50.026838   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHUsername
	I0103 20:12:50.027074   61676 main.go:141] libmachine: Using SSH client type: native
	I0103 20:12:50.027381   61676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.50.197 22 <nil> <nil>}
	I0103 20:12:50.027393   61676 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0103 20:12:50.159031   61676 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704312770.110466062
	
	I0103 20:12:50.159053   61676 fix.go:206] guest clock: 1704312770.110466062
	I0103 20:12:50.159061   61676 fix.go:219] Guest: 2024-01-03 20:12:50.110466062 +0000 UTC Remote: 2024-01-03 20:12:50.022969488 +0000 UTC m=+256.568741537 (delta=87.496574ms)
	I0103 20:12:50.159083   61676 fix.go:190] guest clock delta is within tolerance: 87.496574ms
	I0103 20:12:50.159089   61676 start.go:83] releasing machines lock for "embed-certs-451331", held for 19.564082089s
	I0103 20:12:50.159117   61676 main.go:141] libmachine: (embed-certs-451331) Calling .DriverName
	I0103 20:12:50.159421   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetIP
	I0103 20:12:50.162216   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:50.162550   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:4a:19", ip: ""} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:12:50.162577   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:50.162762   61676 main.go:141] libmachine: (embed-certs-451331) Calling .DriverName
	I0103 20:12:50.163248   61676 main.go:141] libmachine: (embed-certs-451331) Calling .DriverName
	I0103 20:12:50.163433   61676 main.go:141] libmachine: (embed-certs-451331) Calling .DriverName
	I0103 20:12:50.163532   61676 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0103 20:12:50.163583   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHHostname
	I0103 20:12:50.163644   61676 ssh_runner.go:195] Run: cat /version.json
	I0103 20:12:50.163671   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHHostname
	I0103 20:12:50.166588   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:50.166753   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:50.166957   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:4a:19", ip: ""} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:12:50.166987   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:50.167192   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHPort
	I0103 20:12:50.167329   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:4a:19", ip: ""} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:12:50.167358   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:50.167362   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHKeyPath
	I0103 20:12:50.167500   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHUsername
	I0103 20:12:50.167590   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHPort
	I0103 20:12:50.167684   61676 sshutil.go:53] new ssh client: &{IP:192.168.50.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/embed-certs-451331/id_rsa Username:docker}
	I0103 20:12:50.167761   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHKeyPath
	I0103 20:12:50.167905   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHUsername
	I0103 20:12:50.168096   61676 sshutil.go:53] new ssh client: &{IP:192.168.50.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/embed-certs-451331/id_rsa Username:docker}
	I0103 20:12:50.298482   61676 ssh_runner.go:195] Run: systemctl --version
	I0103 20:12:50.304252   61676 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0103 20:12:50.442709   61676 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0103 20:12:50.448879   61676 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0103 20:12:50.448959   61676 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0103 20:12:50.467183   61676 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0103 20:12:50.467203   61676 start.go:475] detecting cgroup driver to use...
	I0103 20:12:50.467269   61676 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0103 20:12:50.482438   61676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0103 20:12:50.493931   61676 docker.go:203] disabling cri-docker service (if available) ...
	I0103 20:12:50.493997   61676 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0103 20:12:50.506860   61676 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0103 20:12:50.519279   61676 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0103 20:12:50.627391   61676 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0103 20:12:50.748160   61676 docker.go:219] disabling docker service ...
	I0103 20:12:50.748220   61676 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0103 20:12:50.760970   61676 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0103 20:12:50.772252   61676 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0103 20:12:50.889707   61676 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0103 20:12:51.003794   61676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0103 20:12:51.016226   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0103 20:12:51.032543   61676 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0103 20:12:51.032600   61676 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:12:51.042477   61676 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0103 20:12:51.042559   61676 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:12:51.053103   61676 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:12:51.063469   61676 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:12:51.073912   61676 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0103 20:12:51.083314   61676 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0103 20:12:51.092920   61676 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0103 20:12:51.092969   61676 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0103 20:12:51.106690   61676 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0103 20:12:51.115815   61676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0103 20:12:51.230139   61676 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0103 20:12:51.413184   61676 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0103 20:12:51.413315   61676 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0103 20:12:51.417926   61676 start.go:543] Will wait 60s for crictl version
	I0103 20:12:51.417988   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:12:51.421507   61676 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0103 20:12:51.465370   61676 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0103 20:12:51.465453   61676 ssh_runner.go:195] Run: crio --version
	I0103 20:12:51.519590   61676 ssh_runner.go:195] Run: crio --version
	I0103 20:12:51.582633   61676 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0103 20:12:51.583888   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetIP
	I0103 20:12:51.587068   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:51.587442   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:4a:19", ip: ""} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:12:51.587486   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:51.587724   61676 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0103 20:12:51.591798   61676 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0103 20:12:51.602798   61676 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0103 20:12:51.602871   61676 ssh_runner.go:195] Run: sudo crictl images --output json
	I0103 20:12:51.641736   61676 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0103 20:12:51.641799   61676 ssh_runner.go:195] Run: which lz4
	I0103 20:12:51.645386   61676 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0103 20:12:51.649168   61676 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0103 20:12:51.649196   61676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0103 20:12:53.428537   61676 crio.go:444] Took 1.783185 seconds to copy over tarball
	I0103 20:12:53.428601   61676 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0103 20:12:50.183891   62015 main.go:141] libmachine: (no-preload-749210) Calling .Start
	I0103 20:12:50.184083   62015 main.go:141] libmachine: (no-preload-749210) Ensuring networks are active...
	I0103 20:12:50.184749   62015 main.go:141] libmachine: (no-preload-749210) Ensuring network default is active
	I0103 20:12:50.185084   62015 main.go:141] libmachine: (no-preload-749210) Ensuring network mk-no-preload-749210 is active
	I0103 20:12:50.185435   62015 main.go:141] libmachine: (no-preload-749210) Getting domain xml...
	I0103 20:12:50.186067   62015 main.go:141] libmachine: (no-preload-749210) Creating domain...
	I0103 20:12:51.468267   62015 main.go:141] libmachine: (no-preload-749210) Waiting to get IP...
	I0103 20:12:51.469108   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:12:51.469584   62015 main.go:141] libmachine: (no-preload-749210) DBG | unable to find current IP address of domain no-preload-749210 in network mk-no-preload-749210
	I0103 20:12:51.469664   62015 main.go:141] libmachine: (no-preload-749210) DBG | I0103 20:12:51.469570   62702 retry.go:31] will retry after 254.191618ms: waiting for machine to come up
	I0103 20:12:51.724958   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:12:51.725657   62015 main.go:141] libmachine: (no-preload-749210) DBG | unable to find current IP address of domain no-preload-749210 in network mk-no-preload-749210
	I0103 20:12:51.725683   62015 main.go:141] libmachine: (no-preload-749210) DBG | I0103 20:12:51.725609   62702 retry.go:31] will retry after 279.489548ms: waiting for machine to come up
	I0103 20:12:52.007176   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:12:52.007682   62015 main.go:141] libmachine: (no-preload-749210) DBG | unable to find current IP address of domain no-preload-749210 in network mk-no-preload-749210
	I0103 20:12:52.007713   62015 main.go:141] libmachine: (no-preload-749210) DBG | I0103 20:12:52.007628   62702 retry.go:31] will retry after 422.96552ms: waiting for machine to come up
	I0103 20:12:52.432345   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:12:52.432873   62015 main.go:141] libmachine: (no-preload-749210) DBG | unable to find current IP address of domain no-preload-749210 in network mk-no-preload-749210
	I0103 20:12:52.432912   62015 main.go:141] libmachine: (no-preload-749210) DBG | I0103 20:12:52.432844   62702 retry.go:31] will retry after 561.295375ms: waiting for machine to come up
	I0103 20:12:52.995438   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:12:52.995929   62015 main.go:141] libmachine: (no-preload-749210) DBG | unable to find current IP address of domain no-preload-749210 in network mk-no-preload-749210
	I0103 20:12:52.995963   62015 main.go:141] libmachine: (no-preload-749210) DBG | I0103 20:12:52.995878   62702 retry.go:31] will retry after 547.962782ms: waiting for machine to come up
	I0103 20:12:53.545924   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:12:53.546473   62015 main.go:141] libmachine: (no-preload-749210) DBG | unable to find current IP address of domain no-preload-749210 in network mk-no-preload-749210
	I0103 20:12:53.546558   62015 main.go:141] libmachine: (no-preload-749210) DBG | I0103 20:12:53.546453   62702 retry.go:31] will retry after 927.631327ms: waiting for machine to come up
	I0103 20:12:54.475549   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:12:54.476000   62015 main.go:141] libmachine: (no-preload-749210) DBG | unable to find current IP address of domain no-preload-749210 in network mk-no-preload-749210
	I0103 20:12:54.476046   62015 main.go:141] libmachine: (no-preload-749210) DBG | I0103 20:12:54.475945   62702 retry.go:31] will retry after 880.192703ms: waiting for machine to come up
	I0103 20:12:56.224357   61676 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.795734066s)
	I0103 20:12:56.224386   61676 crio.go:451] Took 2.795820 seconds to extract the tarball
	I0103 20:12:56.224406   61676 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0103 20:12:56.266955   61676 ssh_runner.go:195] Run: sudo crictl images --output json
	I0103 20:12:56.318766   61676 crio.go:496] all images are preloaded for cri-o runtime.
	I0103 20:12:56.318789   61676 cache_images.go:84] Images are preloaded, skipping loading
	I0103 20:12:56.318871   61676 ssh_runner.go:195] Run: crio config
	I0103 20:12:56.378376   61676 cni.go:84] Creating CNI manager for ""
	I0103 20:12:56.378401   61676 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0103 20:12:56.378423   61676 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0103 20:12:56.378451   61676 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.197 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-451331 NodeName:embed-certs-451331 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.197"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.197 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0103 20:12:56.378619   61676 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.197
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-451331"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.197
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.197"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0103 20:12:56.378714   61676 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-451331 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.197
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-451331 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0103 20:12:56.378777   61676 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0103 20:12:56.387967   61676 binaries.go:44] Found k8s binaries, skipping transfer
	I0103 20:12:56.388037   61676 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0103 20:12:56.396000   61676 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0103 20:12:56.411880   61676 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0103 20:12:56.427567   61676 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0103 20:12:56.443342   61676 ssh_runner.go:195] Run: grep 192.168.50.197	control-plane.minikube.internal$ /etc/hosts
	I0103 20:12:56.446991   61676 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.197	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0103 20:12:56.458659   61676 certs.go:56] Setting up /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/embed-certs-451331 for IP: 192.168.50.197
	I0103 20:12:56.458696   61676 certs.go:190] acquiring lock for shared ca certs: {Name:mkcbd6a6a2f3ee7625ecf4a1f72bb7f9689bd33d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:12:56.458844   61676 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17885-9609/.minikube/ca.key
	I0103 20:12:56.458904   61676 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.key
	I0103 20:12:56.459010   61676 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/embed-certs-451331/client.key
	I0103 20:12:56.459092   61676 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/embed-certs-451331/apiserver.key.d719e12a
	I0103 20:12:56.459159   61676 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/embed-certs-451331/proxy-client.key
	I0103 20:12:56.459299   61676 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795.pem (1338 bytes)
	W0103 20:12:56.459341   61676 certs.go:433] ignoring /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795_empty.pem, impossibly tiny 0 bytes
	I0103 20:12:56.459358   61676 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem (1675 bytes)
	I0103 20:12:56.459400   61676 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem (1078 bytes)
	I0103 20:12:56.459434   61676 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem (1123 bytes)
	I0103 20:12:56.459466   61676 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem (1679 bytes)
	I0103 20:12:56.459522   61676 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem (1708 bytes)
	I0103 20:12:56.460408   61676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/embed-certs-451331/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0103 20:12:56.481997   61676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/embed-certs-451331/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0103 20:12:56.504016   61676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/embed-certs-451331/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0103 20:12:56.526477   61676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/embed-certs-451331/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0103 20:12:56.548471   61676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0103 20:12:56.570763   61676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0103 20:12:56.592910   61676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0103 20:12:56.617765   61676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0103 20:12:56.646025   61676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem --> /usr/share/ca-certificates/167952.pem (1708 bytes)
	I0103 20:12:56.668629   61676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0103 20:12:56.690927   61676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795.pem --> /usr/share/ca-certificates/16795.pem (1338 bytes)
	I0103 20:12:56.712067   61676 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0103 20:12:56.727773   61676 ssh_runner.go:195] Run: openssl version
	I0103 20:12:56.733000   61676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0103 20:12:56.742921   61676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:12:56.747499   61676 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  3 18:58 /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:12:56.747562   61676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:12:56.752732   61676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0103 20:12:56.762510   61676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16795.pem && ln -fs /usr/share/ca-certificates/16795.pem /etc/ssl/certs/16795.pem"
	I0103 20:12:56.772401   61676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16795.pem
	I0103 20:12:56.777123   61676 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  3 19:07 /usr/share/ca-certificates/16795.pem
	I0103 20:12:56.777180   61676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16795.pem
	I0103 20:12:56.782490   61676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16795.pem /etc/ssl/certs/51391683.0"
	I0103 20:12:56.793745   61676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167952.pem && ln -fs /usr/share/ca-certificates/167952.pem /etc/ssl/certs/167952.pem"
	I0103 20:12:56.805156   61676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167952.pem
	I0103 20:12:56.809897   61676 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  3 19:07 /usr/share/ca-certificates/167952.pem
	I0103 20:12:56.809954   61676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167952.pem
	I0103 20:12:56.815432   61676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167952.pem /etc/ssl/certs/3ec20f2e.0"
	I0103 20:12:56.826498   61676 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0103 20:12:56.831012   61676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0103 20:12:56.837150   61676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0103 20:12:56.843256   61676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0103 20:12:56.849182   61676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0103 20:12:56.854882   61676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0103 20:12:56.862018   61676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0103 20:12:56.867863   61676 kubeadm.go:404] StartCluster: {Name:embed-certs-451331 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.28.4 ClusterName:embed-certs-451331 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.197 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 20:12:56.867982   61676 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0103 20:12:56.868029   61676 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0103 20:12:56.909417   61676 cri.go:89] found id: ""
	I0103 20:12:56.909523   61676 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0103 20:12:56.919487   61676 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0103 20:12:56.919515   61676 kubeadm.go:636] restartCluster start
	I0103 20:12:56.919568   61676 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0103 20:12:56.929137   61676 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:12:56.930326   61676 kubeconfig.go:92] found "embed-certs-451331" server: "https://192.168.50.197:8443"
	I0103 20:12:56.932682   61676 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0103 20:12:56.941846   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:12:56.941909   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:12:56.953616   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:12:57.442188   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:12:57.442281   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:12:57.458303   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:12:57.942905   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:12:57.942988   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:12:57.955860   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:12:58.442326   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:12:58.442420   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:12:58.454294   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:12:55.357897   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:12:55.358462   62015 main.go:141] libmachine: (no-preload-749210) DBG | unable to find current IP address of domain no-preload-749210 in network mk-no-preload-749210
	I0103 20:12:55.358492   62015 main.go:141] libmachine: (no-preload-749210) DBG | I0103 20:12:55.358429   62702 retry.go:31] will retry after 1.158958207s: waiting for machine to come up
	I0103 20:12:56.518837   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:12:56.519260   62015 main.go:141] libmachine: (no-preload-749210) DBG | unable to find current IP address of domain no-preload-749210 in network mk-no-preload-749210
	I0103 20:12:56.519306   62015 main.go:141] libmachine: (no-preload-749210) DBG | I0103 20:12:56.519224   62702 retry.go:31] will retry after 1.620553071s: waiting for machine to come up
	I0103 20:12:58.141980   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:12:58.142505   62015 main.go:141] libmachine: (no-preload-749210) DBG | unable to find current IP address of domain no-preload-749210 in network mk-no-preload-749210
	I0103 20:12:58.142549   62015 main.go:141] libmachine: (no-preload-749210) DBG | I0103 20:12:58.142454   62702 retry.go:31] will retry after 1.525068593s: waiting for machine to come up
	I0103 20:12:59.670380   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:12:59.670880   62015 main.go:141] libmachine: (no-preload-749210) DBG | unable to find current IP address of domain no-preload-749210 in network mk-no-preload-749210
	I0103 20:12:59.670909   62015 main.go:141] libmachine: (no-preload-749210) DBG | I0103 20:12:59.670827   62702 retry.go:31] will retry after 1.772431181s: waiting for machine to come up
	I0103 20:12:58.942887   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:12:58.942975   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:12:58.956781   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:12:59.442313   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:12:59.442402   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:12:59.455837   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:12:59.942355   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:12:59.942439   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:12:59.954326   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:00.441870   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:13:00.441960   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:00.454004   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:00.941882   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:13:00.941995   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:00.958004   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:01.442573   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:13:01.442664   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:01.458604   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:01.942062   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:13:01.942170   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:01.958396   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:02.442928   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:13:02.443027   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:02.456612   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:02.941943   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:13:02.942056   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:02.953939   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:03.442552   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:13:03.442633   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:03.454840   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:01.445221   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:01.445608   62015 main.go:141] libmachine: (no-preload-749210) DBG | unable to find current IP address of domain no-preload-749210 in network mk-no-preload-749210
	I0103 20:13:01.445647   62015 main.go:141] libmachine: (no-preload-749210) DBG | I0103 20:13:01.445565   62702 retry.go:31] will retry after 2.830747633s: waiting for machine to come up
	I0103 20:13:04.279514   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:04.279996   62015 main.go:141] libmachine: (no-preload-749210) DBG | unable to find current IP address of domain no-preload-749210 in network mk-no-preload-749210
	I0103 20:13:04.280020   62015 main.go:141] libmachine: (no-preload-749210) DBG | I0103 20:13:04.279963   62702 retry.go:31] will retry after 4.03880385s: waiting for machine to come up
	I0103 20:13:03.942687   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:13:03.942774   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:03.954714   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:04.442265   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:13:04.442357   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:04.454216   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:04.942877   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:13:04.942952   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:04.954944   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:05.442467   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:13:05.442596   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:05.454305   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:05.942383   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:13:05.942468   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:05.954074   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:06.442723   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:13:06.442811   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:06.454629   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:06.942200   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:13:06.942283   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:06.953799   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:06.953829   61676 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0103 20:13:06.953836   61676 kubeadm.go:1135] stopping kube-system containers ...
	I0103 20:13:06.953845   61676 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0103 20:13:06.953904   61676 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0103 20:13:06.989109   61676 cri.go:89] found id: ""
	I0103 20:13:06.989214   61676 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0103 20:13:07.004822   61676 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0103 20:13:07.014393   61676 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0103 20:13:07.014454   61676 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0103 20:13:07.023669   61676 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0103 20:13:07.023691   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:07.139277   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:07.626388   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:07.814648   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:07.901750   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:07.962623   61676 api_server.go:52] waiting for apiserver process to appear ...
	I0103 20:13:07.962710   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:13:08.463820   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:13:08.322801   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:08.323160   62015 main.go:141] libmachine: (no-preload-749210) Found IP for machine: 192.168.61.245
	I0103 20:13:08.323203   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has current primary IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:08.323222   62015 main.go:141] libmachine: (no-preload-749210) Reserving static IP address...
	I0103 20:13:08.323600   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "no-preload-749210", mac: "52:54:00:fb:87:c7", ip: "192.168.61.245"} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:08.323632   62015 main.go:141] libmachine: (no-preload-749210) Reserved static IP address: 192.168.61.245
	I0103 20:13:08.323664   62015 main.go:141] libmachine: (no-preload-749210) DBG | skip adding static IP to network mk-no-preload-749210 - found existing host DHCP lease matching {name: "no-preload-749210", mac: "52:54:00:fb:87:c7", ip: "192.168.61.245"}
	I0103 20:13:08.323684   62015 main.go:141] libmachine: (no-preload-749210) DBG | Getting to WaitForSSH function...
	I0103 20:13:08.323698   62015 main.go:141] libmachine: (no-preload-749210) Waiting for SSH to be available...
	I0103 20:13:08.325529   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:08.325831   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:87:c7", ip: ""} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:08.325863   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:08.325949   62015 main.go:141] libmachine: (no-preload-749210) DBG | Using SSH client type: external
	I0103 20:13:08.325977   62015 main.go:141] libmachine: (no-preload-749210) DBG | Using SSH private key: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/no-preload-749210/id_rsa (-rw-------)
	I0103 20:13:08.326013   62015 main.go:141] libmachine: (no-preload-749210) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.245 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17885-9609/.minikube/machines/no-preload-749210/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0103 20:13:08.326030   62015 main.go:141] libmachine: (no-preload-749210) DBG | About to run SSH command:
	I0103 20:13:08.326053   62015 main.go:141] libmachine: (no-preload-749210) DBG | exit 0
	I0103 20:13:08.418368   62015 main.go:141] libmachine: (no-preload-749210) DBG | SSH cmd err, output: <nil>: 
	I0103 20:13:08.418718   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetConfigRaw
	I0103 20:13:08.419464   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetIP
	I0103 20:13:08.421838   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:08.422172   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:87:c7", ip: ""} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:08.422199   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:08.422460   62015 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/no-preload-749210/config.json ...
	I0103 20:13:08.422680   62015 machine.go:88] provisioning docker machine ...
	I0103 20:13:08.422702   62015 main.go:141] libmachine: (no-preload-749210) Calling .DriverName
	I0103 20:13:08.422883   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetMachineName
	I0103 20:13:08.423027   62015 buildroot.go:166] provisioning hostname "no-preload-749210"
	I0103 20:13:08.423047   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetMachineName
	I0103 20:13:08.423153   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHHostname
	I0103 20:13:08.425105   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:08.425377   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:87:c7", ip: ""} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:08.425408   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:08.425583   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHPort
	I0103 20:13:08.425734   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHKeyPath
	I0103 20:13:08.425869   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHKeyPath
	I0103 20:13:08.425987   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHUsername
	I0103 20:13:08.426160   62015 main.go:141] libmachine: Using SSH client type: native
	I0103 20:13:08.426488   62015 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.61.245 22 <nil> <nil>}
	I0103 20:13:08.426501   62015 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-749210 && echo "no-preload-749210" | sudo tee /etc/hostname
	I0103 20:13:08.579862   62015 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-749210
	
	I0103 20:13:08.579892   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHHostname
	I0103 20:13:08.583166   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:08.583600   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:87:c7", ip: ""} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:08.583635   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:08.583828   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHPort
	I0103 20:13:08.584039   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHKeyPath
	I0103 20:13:08.584225   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHKeyPath
	I0103 20:13:08.584391   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHUsername
	I0103 20:13:08.584593   62015 main.go:141] libmachine: Using SSH client type: native
	I0103 20:13:08.584928   62015 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.61.245 22 <nil> <nil>}
	I0103 20:13:08.584954   62015 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-749210' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-749210/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-749210' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0103 20:13:08.729661   62015 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0103 20:13:08.729697   62015 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17885-9609/.minikube CaCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17885-9609/.minikube}
	I0103 20:13:08.729738   62015 buildroot.go:174] setting up certificates
	I0103 20:13:08.729759   62015 provision.go:83] configureAuth start
	I0103 20:13:08.729776   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetMachineName
	I0103 20:13:08.730101   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetIP
	I0103 20:13:08.733282   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:08.733694   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:87:c7", ip: ""} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:08.733728   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:08.733868   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHHostname
	I0103 20:13:08.736223   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:08.736557   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:87:c7", ip: ""} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:08.736589   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:08.736763   62015 provision.go:138] copyHostCerts
	I0103 20:13:08.736830   62015 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem, removing ...
	I0103 20:13:08.736847   62015 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem
	I0103 20:13:08.736913   62015 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem (1078 bytes)
	I0103 20:13:08.737035   62015 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem, removing ...
	I0103 20:13:08.737047   62015 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem
	I0103 20:13:08.737077   62015 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem (1123 bytes)
	I0103 20:13:08.737177   62015 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem, removing ...
	I0103 20:13:08.737188   62015 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem
	I0103 20:13:08.737218   62015 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem (1679 bytes)
	I0103 20:13:08.737295   62015 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem org=jenkins.no-preload-749210 san=[192.168.61.245 192.168.61.245 localhost 127.0.0.1 minikube no-preload-749210]
	I0103 20:13:09.018604   62015 provision.go:172] copyRemoteCerts
	I0103 20:13:09.018662   62015 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0103 20:13:09.018684   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHHostname
	I0103 20:13:09.021339   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:09.021729   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:87:c7", ip: ""} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:09.021777   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:09.021852   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHPort
	I0103 20:13:09.022068   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHKeyPath
	I0103 20:13:09.022220   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHUsername
	I0103 20:13:09.022405   62015 sshutil.go:53] new ssh client: &{IP:192.168.61.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/no-preload-749210/id_rsa Username:docker}
	I0103 20:13:09.120023   62015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0103 20:13:09.143242   62015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0103 20:13:09.166206   62015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0103 20:13:09.192425   62015 provision.go:86] duration metric: configureAuth took 462.649611ms
	I0103 20:13:09.192457   62015 buildroot.go:189] setting minikube options for container-runtime
	I0103 20:13:09.192678   62015 config.go:182] Loaded profile config "no-preload-749210": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0103 20:13:09.192770   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHHostname
	I0103 20:13:09.195193   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:09.195594   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:87:c7", ip: ""} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:09.195633   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:09.195852   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHPort
	I0103 20:13:09.196100   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHKeyPath
	I0103 20:13:09.196272   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHKeyPath
	I0103 20:13:09.196437   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHUsername
	I0103 20:13:09.196637   62015 main.go:141] libmachine: Using SSH client type: native
	I0103 20:13:09.197028   62015 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.61.245 22 <nil> <nil>}
	I0103 20:13:09.197048   62015 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0103 20:13:09.528890   62015 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0103 20:13:09.528915   62015 machine.go:91] provisioned docker machine in 1.106221183s
	I0103 20:13:09.528924   62015 start.go:300] post-start starting for "no-preload-749210" (driver="kvm2")
	I0103 20:13:09.528949   62015 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0103 20:13:09.528966   62015 main.go:141] libmachine: (no-preload-749210) Calling .DriverName
	I0103 20:13:09.529337   62015 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0103 20:13:09.529372   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHHostname
	I0103 20:13:09.532679   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:09.533032   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:87:c7", ip: ""} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:09.533063   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:09.533262   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHPort
	I0103 20:13:09.533490   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHKeyPath
	I0103 20:13:09.533675   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHUsername
	I0103 20:13:09.533841   62015 sshutil.go:53] new ssh client: &{IP:192.168.61.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/no-preload-749210/id_rsa Username:docker}
	I0103 20:13:09.632949   62015 ssh_runner.go:195] Run: cat /etc/os-release
	I0103 20:13:09.638382   62015 info.go:137] Remote host: Buildroot 2021.02.12
	I0103 20:13:09.638421   62015 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-9609/.minikube/addons for local assets ...
	I0103 20:13:09.638502   62015 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-9609/.minikube/files for local assets ...
	I0103 20:13:09.638617   62015 filesync.go:149] local asset: /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem -> 167952.pem in /etc/ssl/certs
	I0103 20:13:09.638744   62015 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0103 20:13:09.650407   62015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem --> /etc/ssl/certs/167952.pem (1708 bytes)
	I0103 20:13:09.672528   62015 start.go:303] post-start completed in 143.577643ms
	I0103 20:13:09.672560   62015 fix.go:56] fixHost completed within 19.513324819s
	I0103 20:13:09.672585   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHHostname
	I0103 20:13:09.675037   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:09.675398   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:87:c7", ip: ""} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:09.675430   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:09.675587   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHPort
	I0103 20:13:09.675811   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHKeyPath
	I0103 20:13:09.675963   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHKeyPath
	I0103 20:13:09.676112   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHUsername
	I0103 20:13:09.676294   62015 main.go:141] libmachine: Using SSH client type: native
	I0103 20:13:09.676674   62015 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.61.245 22 <nil> <nil>}
	I0103 20:13:09.676690   62015 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0103 20:13:09.811720   62050 start.go:369] acquired machines lock for "default-k8s-diff-port-018788" in 4m4.205717121s
	I0103 20:13:09.811786   62050 start.go:96] Skipping create...Using existing machine configuration
	I0103 20:13:09.811797   62050 fix.go:54] fixHost starting: 
	I0103 20:13:09.812213   62050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:13:09.812257   62050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:13:09.831972   62050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36915
	I0103 20:13:09.832420   62050 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:13:09.832973   62050 main.go:141] libmachine: Using API Version  1
	I0103 20:13:09.833004   62050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:13:09.833345   62050 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:13:09.833505   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .DriverName
	I0103 20:13:09.833637   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetState
	I0103 20:13:09.835476   62050 fix.go:102] recreateIfNeeded on default-k8s-diff-port-018788: state=Stopped err=<nil>
	I0103 20:13:09.835520   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .DriverName
	W0103 20:13:09.835689   62050 fix.go:128] unexpected machine state, will restart: <nil>
	I0103 20:13:09.837499   62050 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-018788" ...
	I0103 20:13:09.838938   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .Start
	I0103 20:13:09.839117   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Ensuring networks are active...
	I0103 20:13:09.839888   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Ensuring network default is active
	I0103 20:13:09.840347   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Ensuring network mk-default-k8s-diff-port-018788 is active
	I0103 20:13:09.840765   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Getting domain xml...
	I0103 20:13:09.841599   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Creating domain...
	I0103 20:13:09.811571   62015 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704312789.764323206
	
	I0103 20:13:09.811601   62015 fix.go:206] guest clock: 1704312789.764323206
	I0103 20:13:09.811611   62015 fix.go:219] Guest: 2024-01-03 20:13:09.764323206 +0000 UTC Remote: 2024-01-03 20:13:09.672564299 +0000 UTC m=+244.986151230 (delta=91.758907ms)
	I0103 20:13:09.811636   62015 fix.go:190] guest clock delta is within tolerance: 91.758907ms
	I0103 20:13:09.811642   62015 start.go:83] releasing machines lock for "no-preload-749210", held for 19.652439302s
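	[editor's note] The guest-clock check above (fix.go) compares the VM's `date +%s.%N` output against the host clock and only forces a resync when the difference is too large. A minimal, self-contained Go sketch of that comparison — the one-second tolerance and helper names are assumptions for illustration, not minikube's actual code:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts the "seconds.nanoseconds" string produced by
// `date +%s.%N` on the guest into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	// Example guest output, copied from the log line above.
	guest, err := parseGuestClock("1704312789.764323206")
	if err != nil {
		panic(err)
	}
	host := time.Now()

	// Tolerance value is an assumption; minikube only logs the delta and
	// skips resyncing when it is "within tolerance".
	const tolerance = 1 * time.Second
	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	if delta <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}
```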
	I0103 20:13:09.811678   62015 main.go:141] libmachine: (no-preload-749210) Calling .DriverName
	I0103 20:13:09.811949   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetIP
	I0103 20:13:09.815012   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:09.815391   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:87:c7", ip: ""} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:09.815429   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:09.815641   62015 main.go:141] libmachine: (no-preload-749210) Calling .DriverName
	I0103 20:13:09.816177   62015 main.go:141] libmachine: (no-preload-749210) Calling .DriverName
	I0103 20:13:09.816363   62015 main.go:141] libmachine: (no-preload-749210) Calling .DriverName
	I0103 20:13:09.816471   62015 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0103 20:13:09.816509   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHHostname
	I0103 20:13:09.816620   62015 ssh_runner.go:195] Run: cat /version.json
	I0103 20:13:09.816646   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHHostname
	I0103 20:13:09.819652   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:09.819909   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:09.820058   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:87:c7", ip: ""} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:09.820088   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:09.820319   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:87:c7", ip: ""} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:09.820345   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:09.820377   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHPort
	I0103 20:13:09.820581   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHKeyPath
	I0103 20:13:09.820646   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHPort
	I0103 20:13:09.820753   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHUsername
	I0103 20:13:09.820822   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHKeyPath
	I0103 20:13:09.820910   62015 sshutil.go:53] new ssh client: &{IP:192.168.61.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/no-preload-749210/id_rsa Username:docker}
	I0103 20:13:09.821007   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHUsername
	I0103 20:13:09.821131   62015 sshutil.go:53] new ssh client: &{IP:192.168.61.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/no-preload-749210/id_rsa Username:docker}
	I0103 20:13:09.949119   62015 ssh_runner.go:195] Run: systemctl --version
	I0103 20:13:09.956247   62015 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0103 20:13:10.116715   62015 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0103 20:13:10.122512   62015 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0103 20:13:10.122640   62015 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0103 20:13:10.142239   62015 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0103 20:13:10.142265   62015 start.go:475] detecting cgroup driver to use...
	I0103 20:13:10.142336   62015 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0103 20:13:10.159473   62015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0103 20:13:10.175492   62015 docker.go:203] disabling cri-docker service (if available) ...
	I0103 20:13:10.175555   62015 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0103 20:13:10.191974   62015 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0103 20:13:10.208639   62015 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0103 20:13:10.343228   62015 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0103 20:13:10.457642   62015 docker.go:219] disabling docker service ...
	I0103 20:13:10.457720   62015 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0103 20:13:10.475117   62015 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0103 20:13:10.491265   62015 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0103 20:13:10.613064   62015 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0103 20:13:10.741969   62015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0103 20:13:10.755923   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0103 20:13:10.775483   62015 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0103 20:13:10.775550   62015 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:13:10.785489   62015 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0103 20:13:10.785557   62015 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:13:10.795303   62015 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:13:10.804763   62015 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:13:10.814559   62015 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0103 20:13:10.824431   62015 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0103 20:13:10.833193   62015 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0103 20:13:10.833273   62015 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0103 20:13:10.850446   62015 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0103 20:13:10.861775   62015 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0103 20:13:11.021577   62015 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0103 20:13:11.217675   62015 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0103 20:13:11.217748   62015 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0103 20:13:11.222475   62015 start.go:543] Will wait 60s for crictl version
	I0103 20:13:11.222552   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:13:11.226128   62015 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0103 20:13:11.266681   62015 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0103 20:13:11.266775   62015 ssh_runner.go:195] Run: crio --version
	I0103 20:13:11.313142   62015 ssh_runner.go:195] Run: crio --version
	I0103 20:13:11.358396   62015 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
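	[editor's note] The two "Will wait 60s" steps above (start.go) are simple poll-until-deadline loops: stat the CRI socket, then run `crictl version`, retrying until the runtime answers or the timeout expires. A hedged Go sketch of the socket wait; the 250ms poll interval is an assumption:

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists or the deadline expires.
// The 250ms poll interval is an illustrative choice, not minikube's.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
		}
		time.Sleep(250 * time.Millisecond)
	}
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("socket is ready")
}
```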
	I0103 20:13:08.963472   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:13:09.462836   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:13:09.963771   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:13:09.991718   61676 api_server.go:72] duration metric: took 2.029094062s to wait for apiserver process to appear ...
	I0103 20:13:09.991748   61676 api_server.go:88] waiting for apiserver healthz status ...
	I0103 20:13:09.991769   61676 api_server.go:253] Checking apiserver healthz at https://192.168.50.197:8443/healthz ...
	I0103 20:13:09.992264   61676 api_server.go:269] stopped: https://192.168.50.197:8443/healthz: Get "https://192.168.50.197:8443/healthz": dial tcp 192.168.50.197:8443: connect: connection refused
	I0103 20:13:10.491803   61676 api_server.go:253] Checking apiserver healthz at https://192.168.50.197:8443/healthz ...
	I0103 20:13:11.359808   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetIP
	I0103 20:13:11.363074   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:11.363434   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:87:c7", ip: ""} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:11.363465   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:11.363695   62015 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0103 20:13:11.367689   62015 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0103 20:13:11.378693   62015 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0103 20:13:11.378746   62015 ssh_runner.go:195] Run: sudo crictl images --output json
	I0103 20:13:11.416544   62015 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0103 20:13:11.416570   62015 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0103 20:13:11.416642   62015 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 20:13:11.416698   62015 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0103 20:13:11.416724   62015 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0103 20:13:11.416699   62015 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0103 20:13:11.416929   62015 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0103 20:13:11.416671   62015 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0103 20:13:11.417054   62015 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0103 20:13:11.417093   62015 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0103 20:13:11.418600   62015 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0103 20:13:11.418621   62015 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0103 20:13:11.418630   62015 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0103 20:13:11.418646   62015 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0103 20:13:11.418661   62015 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 20:13:11.418675   62015 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0103 20:13:11.418685   62015 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0103 20:13:11.418697   62015 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0103 20:13:11.635223   62015 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0103 20:13:11.662007   62015 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0103 20:13:11.668522   62015 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0103 20:13:11.671471   62015 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0103 20:13:11.672069   62015 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0103 20:13:11.685216   62015 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0103 20:13:11.687462   62015 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0103 20:13:11.716775   62015 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0103 20:13:11.716825   62015 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0103 20:13:11.716882   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:13:11.762358   62015 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0103 20:13:11.762394   62015 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0103 20:13:11.762463   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:13:11.846225   62015 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0103 20:13:11.846268   62015 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0103 20:13:11.846317   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:13:11.846432   62015 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0103 20:13:11.846473   62015 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0103 20:13:11.846529   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:13:11.846515   62015 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0103 20:13:11.846655   62015 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0103 20:13:11.846711   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:13:11.956577   62015 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0103 20:13:11.956659   62015 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0103 20:13:11.956689   62015 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0103 20:13:11.956746   62015 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0103 20:13:11.956760   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:13:11.956782   62015 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0103 20:13:11.956820   62015 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0103 20:13:11.956873   62015 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0103 20:13:12.064715   62015 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0103 20:13:12.064764   62015 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0103 20:13:12.064720   62015 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0103 20:13:12.064856   62015 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0103 20:13:12.064903   62015 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0103 20:13:12.068647   62015 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0103 20:13:12.068685   62015 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0103 20:13:12.068752   62015 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0103 20:13:12.068767   62015 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0103 20:13:12.068771   62015 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0103 20:13:12.068841   62015 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0103 20:13:12.077600   62015 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0103 20:13:12.077622   62015 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0103 20:13:12.077682   62015 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0103 20:13:12.077798   62015 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0103 20:13:12.109729   62015 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0103 20:13:12.109778   62015 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0103 20:13:12.109838   62015 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0103 20:13:12.109927   62015 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0103 20:13:12.110020   62015 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.10-0
	I0103 20:13:12.237011   62015 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 20:13:14.279507   62015 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.201800359s)
	I0103 20:13:14.279592   62015 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0103 20:13:14.279606   62015 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.10-0: (2.169553787s)
	I0103 20:13:14.279641   62015 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0103 20:13:14.279646   62015 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0103 20:13:14.279645   62015 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.042604307s)
	I0103 20:13:14.279725   62015 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0103 20:13:14.279726   62015 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0103 20:13:14.279760   62015 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 20:13:14.279802   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:13:14.285860   62015 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
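	[editor's note] The cache_images phase above inspects each required image on the node with `sudo podman image inspect --format {{.Id}}`, and when the stored ID does not match the expected hash it removes the stale copy with `crictl rmi` and loads the cached tarball with `sudo podman load -i`. A rough Go sketch of that decision using os/exec; the image ID and tarball path below are copied from the log and serve only as placeholders:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// ensureImage checks whether image is present with the expected ID and, if
// not, reloads it from a cached tarball. This mirrors the commands seen in
// the log; error handling is simplified for illustration.
func ensureImage(image, wantID, tarball string) error {
	out, err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", image).Output()
	if err == nil && strings.TrimSpace(string(out)) == wantID {
		return nil // already present at the expected hash
	}
	// Remove any stale copy, ignoring "not found" errors.
	_ = exec.Command("sudo", "crictl", "rmi", image).Run()
	// Transfer/load the cached image tarball.
	if err := exec.Command("sudo", "podman", "load", "-i", tarball).Run(); err != nil {
		return fmt.Errorf("loading %s from %s: %w", image, tarball, err)
	}
	return nil
}

func main() {
	err := ensureImage(
		"registry.k8s.io/kube-proxy:v1.29.0-rc.2",
		"cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834",
		"/var/lib/minikube/images/kube-proxy_v1.29.0-rc.2",
	)
	if err != nil {
		fmt.Println(err)
	}
}
```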
	I0103 20:13:11.246503   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting to get IP...
	I0103 20:13:11.247669   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:11.248203   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | unable to find current IP address of domain default-k8s-diff-port-018788 in network mk-default-k8s-diff-port-018788
	I0103 20:13:11.248301   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | I0103 20:13:11.248165   62835 retry.go:31] will retry after 292.358185ms: waiting for machine to come up
	I0103 20:13:11.541836   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:11.542224   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | unable to find current IP address of domain default-k8s-diff-port-018788 in network mk-default-k8s-diff-port-018788
	I0103 20:13:11.542257   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | I0103 20:13:11.542168   62835 retry.go:31] will retry after 370.634511ms: waiting for machine to come up
	I0103 20:13:11.914890   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:11.915372   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | unable to find current IP address of domain default-k8s-diff-port-018788 in network mk-default-k8s-diff-port-018788
	I0103 20:13:11.915403   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | I0103 20:13:11.915330   62835 retry.go:31] will retry after 304.80922ms: waiting for machine to come up
	I0103 20:13:12.221826   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:12.222257   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | unable to find current IP address of domain default-k8s-diff-port-018788 in network mk-default-k8s-diff-port-018788
	I0103 20:13:12.222289   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | I0103 20:13:12.222232   62835 retry.go:31] will retry after 534.177843ms: waiting for machine to come up
	I0103 20:13:12.757904   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:12.758389   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | unable to find current IP address of domain default-k8s-diff-port-018788 in network mk-default-k8s-diff-port-018788
	I0103 20:13:12.758422   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | I0103 20:13:12.758334   62835 retry.go:31] will retry after 749.166369ms: waiting for machine to come up
	I0103 20:13:13.509343   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:13.509938   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | unable to find current IP address of domain default-k8s-diff-port-018788 in network mk-default-k8s-diff-port-018788
	I0103 20:13:13.509984   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | I0103 20:13:13.509854   62835 retry.go:31] will retry after 716.215015ms: waiting for machine to come up
	I0103 20:13:14.227886   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:14.228388   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | unable to find current IP address of domain default-k8s-diff-port-018788 in network mk-default-k8s-diff-port-018788
	I0103 20:13:14.228414   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | I0103 20:13:14.228338   62835 retry.go:31] will retry after 1.095458606s: waiting for machine to come up
	I0103 20:13:15.324880   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:15.325299   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | unable to find current IP address of domain default-k8s-diff-port-018788 in network mk-default-k8s-diff-port-018788
	I0103 20:13:15.325332   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | I0103 20:13:15.325250   62835 retry.go:31] will retry after 1.266878415s: waiting for machine to come up
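	[editor's note] The retry.go lines above poll libvirt for the restarted VM's DHCP lease, sleeping a growing, slightly randomized delay between attempts. A minimal Go sketch of that retry-with-backoff pattern; the attempt count and backoff parameters are assumptions:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff calls fn until it succeeds or attempts are exhausted,
// sleeping a growing, jittered delay between tries (parameters assumed).
func retryWithBackoff(attempts int, initial time.Duration, fn func() error) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		fmt.Printf("will retry after %v: %v\n", delay+jitter, err)
		time.Sleep(delay + jitter)
		delay *= 2
	}
	return err
}

func main() {
	tries := 0
	err := retryWithBackoff(5, 300*time.Millisecond, func() error {
		tries++
		if tries < 3 {
			return errors.New("waiting for machine to come up")
		}
		return nil
	})
	if err != nil {
		fmt.Println("gave up:", err)
		return
	}
	fmt.Println("machine is up after", tries, "tries")
}
```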
	I0103 20:13:14.427035   61676 api_server.go:279] https://192.168.50.197:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0103 20:13:14.427077   61676 api_server.go:103] status: https://192.168.50.197:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0103 20:13:14.427119   61676 api_server.go:253] Checking apiserver healthz at https://192.168.50.197:8443/healthz ...
	I0103 20:13:14.462068   61676 api_server.go:279] https://192.168.50.197:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0103 20:13:14.462115   61676 api_server.go:103] status: https://192.168.50.197:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0103 20:13:14.492283   61676 api_server.go:253] Checking apiserver healthz at https://192.168.50.197:8443/healthz ...
	I0103 20:13:14.500354   61676 api_server.go:279] https://192.168.50.197:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0103 20:13:14.500391   61676 api_server.go:103] status: https://192.168.50.197:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0103 20:13:14.991910   61676 api_server.go:253] Checking apiserver healthz at https://192.168.50.197:8443/healthz ...
	I0103 20:13:14.997522   61676 api_server.go:279] https://192.168.50.197:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0103 20:13:14.997550   61676 api_server.go:103] status: https://192.168.50.197:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0103 20:13:15.492157   61676 api_server.go:253] Checking apiserver healthz at https://192.168.50.197:8443/healthz ...
	I0103 20:13:15.500340   61676 api_server.go:279] https://192.168.50.197:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0103 20:13:15.500377   61676 api_server.go:103] status: https://192.168.50.197:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0103 20:13:15.992158   61676 api_server.go:253] Checking apiserver healthz at https://192.168.50.197:8443/healthz ...
	I0103 20:13:16.002940   61676 api_server.go:279] https://192.168.50.197:8443/healthz returned 200:
	ok
	I0103 20:13:16.020171   61676 api_server.go:141] control plane version: v1.28.4
	I0103 20:13:16.020205   61676 api_server.go:131] duration metric: took 6.028448633s to wait for apiserver health ...
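	[editor's note] The healthz wait above (api_server.go) keeps issuing GET requests to https://<apiserver>:8443/healthz roughly every half second, treating connection refusals and 403/500 bodies as "not ready yet" until a plain 200 "ok" arrives. A hedged Go sketch of such a poll; skipping TLS verification and the 500ms interval are simplifications here — the real client authenticates with the cluster CA and client certificates:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout expires.
// InsecureSkipVerify and the 500ms interval are assumptions for this sketch.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, string(body))
		} else {
			fmt.Println("healthz not reachable yet:", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %v", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.197:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```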
	I0103 20:13:16.020216   61676 cni.go:84] Creating CNI manager for ""
	I0103 20:13:16.020226   61676 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0103 20:13:16.022596   61676 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0103 20:13:16.024514   61676 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0103 20:13:16.064582   61676 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0103 20:13:16.113727   61676 system_pods.go:43] waiting for kube-system pods to appear ...
	I0103 20:13:16.124984   61676 system_pods.go:59] 8 kube-system pods found
	I0103 20:13:16.125031   61676 system_pods.go:61] "coredns-5dd5756b68-sx6gg" [6a4ea161-1a32-4c3b-9a0d-b4c596492d8b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0103 20:13:16.125044   61676 system_pods.go:61] "etcd-embed-certs-451331" [01d6441d-5e39-405a-81df-c2ed1e28cf0b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0103 20:13:16.125061   61676 system_pods.go:61] "kube-apiserver-embed-certs-451331" [ed38f120-6a1a-48e7-9346-f792f2e13cfc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0103 20:13:16.125072   61676 system_pods.go:61] "kube-controller-manager-embed-certs-451331" [4ca17ea6-a7e6-425b-98ba-7f917ceb91a0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0103 20:13:16.125086   61676 system_pods.go:61] "kube-proxy-fsnb9" [d1f00cf1-e9c4-442b-a6b3-b633252b840c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0103 20:13:16.125097   61676 system_pods.go:61] "kube-scheduler-embed-certs-451331" [00ec8091-7ed7-40b0-8b63-1c548fa8632d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0103 20:13:16.125111   61676 system_pods.go:61] "metrics-server-57f55c9bc5-sm8rb" [12b9f83d-abf8-431c-a271-b8489d32f0de] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0103 20:13:16.125125   61676 system_pods.go:61] "storage-provisioner" [cbce49e7-cef5-40a1-a017-906fcc77ef66] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0103 20:13:16.125140   61676 system_pods.go:74] duration metric: took 11.390906ms to wait for pod list to return data ...
	I0103 20:13:16.125152   61676 node_conditions.go:102] verifying NodePressure condition ...
	I0103 20:13:16.133036   61676 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0103 20:13:16.133072   61676 node_conditions.go:123] node cpu capacity is 2
	I0103 20:13:16.133086   61676 node_conditions.go:105] duration metric: took 7.928329ms to run NodePressure ...
	I0103 20:13:16.133109   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:16.519151   61676 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0103 20:13:16.530359   61676 kubeadm.go:787] kubelet initialised
	I0103 20:13:16.530380   61676 kubeadm.go:788] duration metric: took 11.203465ms waiting for restarted kubelet to initialise ...
	I0103 20:13:16.530388   61676 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:13:16.540797   61676 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-sx6gg" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:16.550417   61676 pod_ready.go:97] node "embed-certs-451331" hosting pod "coredns-5dd5756b68-sx6gg" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-451331" has status "Ready":"False"
	I0103 20:13:16.550457   61676 pod_ready.go:81] duration metric: took 9.627239ms waiting for pod "coredns-5dd5756b68-sx6gg" in "kube-system" namespace to be "Ready" ...
	E0103 20:13:16.550475   61676 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-451331" hosting pod "coredns-5dd5756b68-sx6gg" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-451331" has status "Ready":"False"
	I0103 20:13:16.550486   61676 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-451331" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:16.557664   61676 pod_ready.go:97] node "embed-certs-451331" hosting pod "etcd-embed-certs-451331" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-451331" has status "Ready":"False"
	I0103 20:13:16.557693   61676 pod_ready.go:81] duration metric: took 7.191907ms waiting for pod "etcd-embed-certs-451331" in "kube-system" namespace to be "Ready" ...
	E0103 20:13:16.557705   61676 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-451331" hosting pod "etcd-embed-certs-451331" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-451331" has status "Ready":"False"
	I0103 20:13:16.557721   61676 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-451331" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:16.566973   61676 pod_ready.go:97] node "embed-certs-451331" hosting pod "kube-apiserver-embed-certs-451331" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-451331" has status "Ready":"False"
	I0103 20:13:16.567007   61676 pod_ready.go:81] duration metric: took 9.268451ms waiting for pod "kube-apiserver-embed-certs-451331" in "kube-system" namespace to be "Ready" ...
	E0103 20:13:16.567019   61676 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-451331" hosting pod "kube-apiserver-embed-certs-451331" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-451331" has status "Ready":"False"
	I0103 20:13:16.567027   61676 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-451331" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:16.587777   61676 pod_ready.go:97] node "embed-certs-451331" hosting pod "kube-controller-manager-embed-certs-451331" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-451331" has status "Ready":"False"
	I0103 20:13:16.587811   61676 pod_ready.go:81] duration metric: took 20.769874ms waiting for pod "kube-controller-manager-embed-certs-451331" in "kube-system" namespace to be "Ready" ...
	E0103 20:13:16.587825   61676 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-451331" hosting pod "kube-controller-manager-embed-certs-451331" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-451331" has status "Ready":"False"
	I0103 20:13:16.587832   61676 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-fsnb9" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:16.923613   61676 pod_ready.go:97] node "embed-certs-451331" hosting pod "kube-proxy-fsnb9" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-451331" has status "Ready":"False"
	I0103 20:13:16.923643   61676 pod_ready.go:81] duration metric: took 335.80096ms waiting for pod "kube-proxy-fsnb9" in "kube-system" namespace to be "Ready" ...
	E0103 20:13:16.923655   61676 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-451331" hosting pod "kube-proxy-fsnb9" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-451331" has status "Ready":"False"
	I0103 20:13:16.923663   61676 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-451331" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:17.323875   61676 pod_ready.go:97] node "embed-certs-451331" hosting pod "kube-scheduler-embed-certs-451331" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-451331" has status "Ready":"False"
	I0103 20:13:17.323911   61676 pod_ready.go:81] duration metric: took 400.238515ms waiting for pod "kube-scheduler-embed-certs-451331" in "kube-system" namespace to be "Ready" ...
	E0103 20:13:17.323922   61676 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-451331" hosting pod "kube-scheduler-embed-certs-451331" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-451331" has status "Ready":"False"
	I0103 20:13:17.323931   61676 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:17.724694   61676 pod_ready.go:97] node "embed-certs-451331" hosting pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-451331" has status "Ready":"False"
	I0103 20:13:17.724727   61676 pod_ready.go:81] duration metric: took 400.785148ms waiting for pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace to be "Ready" ...
	E0103 20:13:17.724741   61676 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-451331" hosting pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-451331" has status "Ready":"False"
	I0103 20:13:17.724750   61676 pod_ready.go:38] duration metric: took 1.194352759s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:13:17.724774   61676 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0103 20:13:17.754724   61676 ops.go:34] apiserver oom_adj: -16
	I0103 20:13:17.754762   61676 kubeadm.go:640] restartCluster took 20.835238159s
	I0103 20:13:17.754774   61676 kubeadm.go:406] StartCluster complete in 20.886921594s
	I0103 20:13:17.754794   61676 settings.go:142] acquiring lock: {Name:mkd213c48538fa01cb82b417485055a8adbf5e4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:13:17.754875   61676 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17885-9609/kubeconfig
	I0103 20:13:17.757638   61676 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-9609/kubeconfig: {Name:mkbd4e6a8b39f5a4a43fb71671a7bbd8b1617cf1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:13:17.759852   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0103 20:13:17.759948   61676 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0103 20:13:17.760022   61676 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-451331"
	I0103 20:13:17.760049   61676 addons.go:237] Setting addon storage-provisioner=true in "embed-certs-451331"
	W0103 20:13:17.760060   61676 addons.go:246] addon storage-provisioner should already be in state true
	I0103 20:13:17.760105   61676 host.go:66] Checking if "embed-certs-451331" exists ...
	I0103 20:13:17.760154   61676 config.go:182] Loaded profile config "embed-certs-451331": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 20:13:17.760202   61676 addons.go:69] Setting default-storageclass=true in profile "embed-certs-451331"
	I0103 20:13:17.760227   61676 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-451331"
	I0103 20:13:17.760525   61676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:13:17.760553   61676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:13:17.760595   61676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:13:17.760619   61676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:13:17.760814   61676 addons.go:69] Setting metrics-server=true in profile "embed-certs-451331"
	I0103 20:13:17.760869   61676 addons.go:237] Setting addon metrics-server=true in "embed-certs-451331"
	W0103 20:13:17.760887   61676 addons.go:246] addon metrics-server should already be in state true
	I0103 20:13:17.760949   61676 host.go:66] Checking if "embed-certs-451331" exists ...
	I0103 20:13:17.761311   61676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:13:17.761367   61676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:13:17.778350   61676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36365
	I0103 20:13:17.778603   61676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40503
	I0103 20:13:17.778840   61676 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:13:17.778947   61676 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:13:17.779349   61676 main.go:141] libmachine: Using API Version  1
	I0103 20:13:17.779369   61676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:13:17.779496   61676 main.go:141] libmachine: Using API Version  1
	I0103 20:13:17.779506   61676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:13:17.779894   61676 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:13:17.779936   61676 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:13:17.780390   61676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46541
	I0103 20:13:17.780507   61676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:13:17.780528   61676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:13:17.780892   61676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:13:17.780933   61676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:13:17.781532   61676 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:13:17.782012   61676 main.go:141] libmachine: Using API Version  1
	I0103 20:13:17.782030   61676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:13:17.782393   61676 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:13:17.782580   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetState
	I0103 20:13:17.786209   61676 addons.go:237] Setting addon default-storageclass=true in "embed-certs-451331"
	W0103 20:13:17.786231   61676 addons.go:246] addon default-storageclass should already be in state true
	I0103 20:13:17.786264   61676 host.go:66] Checking if "embed-certs-451331" exists ...
	I0103 20:13:17.786730   61676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:13:17.786761   61676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:13:17.796538   61676 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-451331" context rescaled to 1 replicas
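
The coredns rescale recorded above goes through the deployment's scale subresource. A minimal client-go sketch of the same operation follows; the kubeconfig path and error handling are illustrative, not the test harness's own code:

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Build a client from a kubeconfig (path is illustrative).
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// Read the current scale of kube-system/coredns and set replicas to 1.
    	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(context.TODO(), "coredns", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	scale.Spec.Replicas = 1
    	if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(context.TODO(), "coredns", scale, metav1.UpdateOptions{}); err != nil {
    		panic(err)
    	}
    	fmt.Println("coredns rescaled to 1 replica")
    }
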
	I0103 20:13:17.796579   61676 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.197 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0103 20:13:17.798616   61676 out.go:177] * Verifying Kubernetes components...
	I0103 20:13:17.800702   61676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 20:13:17.799744   61676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37933
	I0103 20:13:17.801004   61676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37585
	I0103 20:13:17.801125   61676 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:13:17.801622   61676 main.go:141] libmachine: Using API Version  1
	I0103 20:13:17.801643   61676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:13:17.801967   61676 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:13:17.802456   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetState
	I0103 20:13:17.804195   61676 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:13:17.804537   61676 main.go:141] libmachine: (embed-certs-451331) Calling .DriverName
	I0103 20:13:17.804683   61676 main.go:141] libmachine: Using API Version  1
	I0103 20:13:17.804700   61676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:13:17.806577   61676 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 20:13:17.805108   61676 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:13:17.807681   61676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42317
	I0103 20:13:17.808340   61676 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0103 20:13:17.808354   61676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0103 20:13:17.808371   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHHostname
	I0103 20:13:17.808513   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetState
	I0103 20:13:17.809005   61676 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:13:17.809510   61676 main.go:141] libmachine: Using API Version  1
	I0103 20:13:17.809529   61676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:13:17.809978   61676 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:13:17.810778   61676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:13:17.810822   61676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:13:17.812250   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:13:17.812607   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:4a:19", ip: ""} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:13:17.812629   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:13:17.812892   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHPort
	I0103 20:13:17.812970   61676 main.go:141] libmachine: (embed-certs-451331) Calling .DriverName
	I0103 20:13:17.813069   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHKeyPath
	I0103 20:13:17.815321   61676 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0103 20:13:17.813342   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHUsername
	I0103 20:13:17.817289   61676 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0103 20:13:17.817308   61676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0103 20:13:17.817336   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHHostname
	I0103 20:13:17.817473   61676 sshutil.go:53] new ssh client: &{IP:192.168.50.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/embed-certs-451331/id_rsa Username:docker}
	I0103 20:13:17.820418   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:13:17.820892   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:4a:19", ip: ""} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:13:17.820920   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:13:17.821168   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHPort
	I0103 20:13:17.821350   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHKeyPath
	I0103 20:13:17.821468   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHUsername
	I0103 20:13:17.821597   61676 sshutil.go:53] new ssh client: &{IP:192.168.50.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/embed-certs-451331/id_rsa Username:docker}
	I0103 20:13:17.829857   61676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34553
	I0103 20:13:17.830343   61676 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:13:17.830847   61676 main.go:141] libmachine: Using API Version  1
	I0103 20:13:17.830869   61676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:13:17.831278   61676 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:13:17.831432   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetState
	I0103 20:13:17.833351   61676 main.go:141] libmachine: (embed-certs-451331) Calling .DriverName
	I0103 20:13:17.833678   61676 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0103 20:13:17.833695   61676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0103 20:13:17.833714   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHHostname
	I0103 20:13:17.837454   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:13:17.837708   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:4a:19", ip: ""} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:13:17.837730   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:13:17.837975   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHPort
	I0103 20:13:17.838211   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHKeyPath
	I0103 20:13:17.838384   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHUsername
	I0103 20:13:17.838534   61676 sshutil.go:53] new ssh client: &{IP:192.168.50.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/embed-certs-451331/id_rsa Username:docker}
	I0103 20:13:18.036885   61676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0103 20:13:18.097340   61676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0103 20:13:18.099953   61676 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0103 20:13:18.099982   61676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0103 20:13:18.242823   61676 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0103 20:13:18.242847   61676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0103 20:13:18.309930   61676 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0103 20:13:18.309959   61676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0103 20:13:18.321992   61676 node_ready.go:35] waiting up to 6m0s for node "embed-certs-451331" to be "Ready" ...
	I0103 20:13:18.322077   61676 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0103 20:13:18.366727   61676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
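
The addon manifests are installed by running kubectl apply against the kubeconfig inside the VM, as the Run lines above show. A minimal local sketch of that apply step, assuming kubectl is on PATH; the file and kubeconfig paths are taken from the log, but the helper itself is illustrative:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	// Apply the metrics-server manifests the same way the log does:
    	// kubectl apply -f <file> ... with an explicit kubeconfig.
    	manifests := []string{
    		"/etc/kubernetes/addons/metrics-apiservice.yaml",
    		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
    		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
    		"/etc/kubernetes/addons/metrics-server-service.yaml",
    	}
    	args := []string{"apply"}
    	for _, m := range manifests {
    		args = append(args, "-f", m)
    	}
    	cmd := exec.Command("kubectl", args...)
    	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
    	out, err := cmd.CombinedOutput()
    	fmt.Print(string(out))
    	if err != nil {
    		fmt.Fprintln(os.Stderr, "apply failed:", err)
    		os.Exit(1)
    	}
    }
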
	I0103 20:13:16.441666   62015 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.161911946s)
	I0103 20:13:16.441698   62015 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0103 20:13:16.441720   62015 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0103 20:13:16.441740   62015 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.155838517s)
	I0103 20:13:16.441767   62015 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0103 20:13:16.441855   62015 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0103 20:13:16.441964   62015 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0103 20:13:20.073248   61676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.975867864s)
	I0103 20:13:20.073318   61676 main.go:141] libmachine: Making call to close driver server
	I0103 20:13:20.073383   61676 main.go:141] libmachine: (embed-certs-451331) Calling .Close
	I0103 20:13:20.073265   61676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.03634078s)
	I0103 20:13:20.073419   61676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.706641739s)
	I0103 20:13:20.073466   61676 main.go:141] libmachine: Making call to close driver server
	I0103 20:13:20.073490   61676 main.go:141] libmachine: (embed-certs-451331) Calling .Close
	I0103 20:13:20.073489   61676 main.go:141] libmachine: Making call to close driver server
	I0103 20:13:20.073553   61676 main.go:141] libmachine: (embed-certs-451331) Calling .Close
	I0103 20:13:20.073744   61676 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:13:20.073759   61676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:13:20.073775   61676 main.go:141] libmachine: Making call to close driver server
	I0103 20:13:20.073786   61676 main.go:141] libmachine: (embed-certs-451331) Calling .Close
	I0103 20:13:20.073878   61676 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:13:20.073905   61676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:13:20.073935   61676 main.go:141] libmachine: (embed-certs-451331) DBG | Closing plugin on server side
	I0103 20:13:20.073938   61676 main.go:141] libmachine: Making call to close driver server
	I0103 20:13:20.073980   61676 main.go:141] libmachine: (embed-certs-451331) DBG | Closing plugin on server side
	I0103 20:13:20.073992   61676 main.go:141] libmachine: (embed-certs-451331) Calling .Close
	I0103 20:13:20.074016   61676 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:13:20.074036   61676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:13:20.074073   61676 main.go:141] libmachine: Making call to close driver server
	I0103 20:13:20.074086   61676 main.go:141] libmachine: (embed-certs-451331) Calling .Close
	I0103 20:13:20.074309   61676 main.go:141] libmachine: (embed-certs-451331) DBG | Closing plugin on server side
	I0103 20:13:20.074369   61676 main.go:141] libmachine: (embed-certs-451331) DBG | Closing plugin on server side
	I0103 20:13:20.074428   61676 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:13:20.074476   61676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:13:20.074454   61676 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:13:20.074506   61676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:13:20.074558   61676 addons.go:473] Verifying addon metrics-server=true in "embed-certs-451331"
	I0103 20:13:20.077560   61676 main.go:141] libmachine: (embed-certs-451331) DBG | Closing plugin on server side
	I0103 20:13:20.077613   61676 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:13:20.077653   61676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:13:20.088401   61676 main.go:141] libmachine: Making call to close driver server
	I0103 20:13:20.088441   61676 main.go:141] libmachine: (embed-certs-451331) Calling .Close
	I0103 20:13:20.088845   61676 main.go:141] libmachine: (embed-certs-451331) DBG | Closing plugin on server side
	I0103 20:13:20.090413   61676 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:13:20.090439   61676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:13:20.092641   61676 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I0103 20:13:16.593786   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:16.594320   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | unable to find current IP address of domain default-k8s-diff-port-018788 in network mk-default-k8s-diff-port-018788
	I0103 20:13:16.594352   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | I0103 20:13:16.594229   62835 retry.go:31] will retry after 1.232411416s: waiting for machine to come up
	I0103 20:13:17.828286   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:17.832049   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | unable to find current IP address of domain default-k8s-diff-port-018788 in network mk-default-k8s-diff-port-018788
	I0103 20:13:17.832078   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | I0103 20:13:17.828787   62835 retry.go:31] will retry after 2.020753248s: waiting for machine to come up
	I0103 20:13:19.851119   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:19.851645   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | unable to find current IP address of domain default-k8s-diff-port-018788 in network mk-default-k8s-diff-port-018788
	I0103 20:13:19.851683   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | I0103 20:13:19.851595   62835 retry.go:31] will retry after 2.720330873s: waiting for machine to come up
	I0103 20:13:20.094375   61676 addons.go:508] enable addons completed in 2.334425533s: enabled=[storage-provisioner metrics-server default-storageclass]
	I0103 20:13:20.325950   61676 node_ready.go:58] node "embed-certs-451331" has status "Ready":"False"
	I0103 20:13:22.327709   61676 node_ready.go:58] node "embed-certs-451331" has status "Ready":"False"
	I0103 20:13:19.820972   62015 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (3.379182556s)
	I0103 20:13:19.821009   62015 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0103 20:13:19.821032   62015 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0103 20:13:19.820976   62015 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (3.378974193s)
	I0103 20:13:19.821081   62015 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0103 20:13:19.821092   62015 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0103 20:13:21.294764   62015 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.47365805s)
	I0103 20:13:21.294796   62015 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0103 20:13:21.294826   62015 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0103 20:13:21.294879   62015 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0103 20:13:24.067996   62015 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.773083678s)
	I0103 20:13:24.068031   62015 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0103 20:13:24.068071   62015 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0103 20:13:24.068131   62015 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0103 20:13:22.573532   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:22.573959   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | unable to find current IP address of domain default-k8s-diff-port-018788 in network mk-default-k8s-diff-port-018788
	I0103 20:13:22.573984   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | I0103 20:13:22.573882   62835 retry.go:31] will retry after 2.869192362s: waiting for machine to come up
	I0103 20:13:25.444272   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:25.444774   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | unable to find current IP address of domain default-k8s-diff-port-018788 in network mk-default-k8s-diff-port-018788
	I0103 20:13:25.444801   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | I0103 20:13:25.444710   62835 retry.go:31] will retry after 3.61848561s: waiting for machine to come up
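
The repeated "will retry after ..." lines come from a polling loop that waits for the restarted VM to obtain a DHCP lease, with the delay growing between attempts. A minimal sketch of that wait-with-backoff pattern; lookupIP is a hypothetical stand-in for the libvirt lease query:

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // lookupIP is a hypothetical stand-in for querying the libvirt DHCP leases
    // for the domain's MAC address; it fails until the guest has an address.
    func lookupIP() (string, error) {
    	return "", errors.New("unable to find current IP address")
    }

    func main() {
    	backoff := time.Second
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		if ip, err := lookupIP(); err == nil {
    			fmt.Println("machine is up at", ip)
    			return
    		}
    		fmt.Printf("will retry after %s: waiting for machine to come up\n", backoff)
    		time.Sleep(backoff)
    		backoff *= 2 // grow the delay between polls
    	}
    	fmt.Println("timed out waiting for machine to come up")
    }
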
	I0103 20:13:24.327795   61676 node_ready.go:58] node "embed-certs-451331" has status "Ready":"False"
	I0103 20:13:24.831015   61676 node_ready.go:49] node "embed-certs-451331" has status "Ready":"True"
	I0103 20:13:24.831037   61676 node_ready.go:38] duration metric: took 6.509012992s waiting for node "embed-certs-451331" to be "Ready" ...
	I0103 20:13:24.831046   61676 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:13:24.838244   61676 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-sx6gg" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:25.345945   61676 pod_ready.go:92] pod "coredns-5dd5756b68-sx6gg" in "kube-system" namespace has status "Ready":"True"
	I0103 20:13:25.345980   61676 pod_ready.go:81] duration metric: took 507.709108ms waiting for pod "coredns-5dd5756b68-sx6gg" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:25.345991   61676 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-451331" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:25.352763   61676 pod_ready.go:92] pod "etcd-embed-certs-451331" in "kube-system" namespace has status "Ready":"True"
	I0103 20:13:25.352798   61676 pod_ready.go:81] duration metric: took 6.794419ms waiting for pod "etcd-embed-certs-451331" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:25.352812   61676 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-451331" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:25.359491   61676 pod_ready.go:92] pod "kube-apiserver-embed-certs-451331" in "kube-system" namespace has status "Ready":"True"
	I0103 20:13:25.359533   61676 pod_ready.go:81] duration metric: took 6.711829ms waiting for pod "kube-apiserver-embed-certs-451331" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:25.359547   61676 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-451331" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:25.867866   61676 pod_ready.go:92] pod "kube-controller-manager-embed-certs-451331" in "kube-system" namespace has status "Ready":"True"
	I0103 20:13:25.867898   61676 pod_ready.go:81] duration metric: took 508.341809ms waiting for pod "kube-controller-manager-embed-certs-451331" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:25.867912   61676 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fsnb9" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:26.026106   61676 pod_ready.go:92] pod "kube-proxy-fsnb9" in "kube-system" namespace has status "Ready":"True"
	I0103 20:13:26.026140   61676 pod_ready.go:81] duration metric: took 158.216243ms waiting for pod "kube-proxy-fsnb9" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:26.026153   61676 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-451331" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:26.428480   61676 pod_ready.go:92] pod "kube-scheduler-embed-certs-451331" in "kube-system" namespace has status "Ready":"True"
	I0103 20:13:26.428506   61676 pod_ready.go:81] duration metric: took 402.345241ms waiting for pod "kube-scheduler-embed-certs-451331" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:26.428525   61676 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:28.438138   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
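
The node_ready and pod_ready checks above poll the API server until the node reports the NodeReady condition and each system-critical pod reports Ready. A minimal client-go sketch of the node half of that polling; the kubeconfig path is illustrative and the timeout mirrors the 6m0s wait in the log:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // illustrative path
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// Poll the node until its NodeReady condition reports True, up to 6 minutes.
    	for start := time.Now(); time.Since(start) < 6*time.Minute; time.Sleep(2 * time.Second) {
    		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "embed-certs-451331", metav1.GetOptions{})
    		if err != nil {
    			continue
    		}
    		for _, cond := range node.Status.Conditions {
    			if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
    				fmt.Println("node is Ready")
    				return
    			}
    		}
    	}
    	fmt.Println("timed out waiting for node to be Ready")
    }
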
	I0103 20:13:27.768745   62015 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (3.700590535s)
	I0103 20:13:27.768774   62015 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0103 20:13:27.768797   62015 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0103 20:13:27.768833   62015 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0103 20:13:28.718165   62015 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0103 20:13:28.718231   62015 cache_images.go:123] Successfully loaded all cached images
	I0103 20:13:28.718239   62015 cache_images.go:92] LoadImages completed in 17.301651166s
	I0103 20:13:28.718342   62015 ssh_runner.go:195] Run: crio config
	I0103 20:13:28.770786   62015 cni.go:84] Creating CNI manager for ""
	I0103 20:13:28.770813   62015 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0103 20:13:28.770838   62015 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0103 20:13:28.770862   62015 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.245 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-749210 NodeName:no-preload-749210 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.245"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.245 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0103 20:13:28.771031   62015 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.245
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-749210"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.245
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.245"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0103 20:13:28.771103   62015 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-749210 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.245
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-749210 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0103 20:13:28.771163   62015 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0103 20:13:28.780756   62015 binaries.go:44] Found k8s binaries, skipping transfer
	I0103 20:13:28.780834   62015 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0103 20:13:28.789160   62015 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I0103 20:13:28.804638   62015 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0103 20:13:28.820113   62015 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I0103 20:13:28.835707   62015 ssh_runner.go:195] Run: grep 192.168.61.245	control-plane.minikube.internal$ /etc/hosts
	I0103 20:13:28.839456   62015 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.245	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0103 20:13:28.850530   62015 certs.go:56] Setting up /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/no-preload-749210 for IP: 192.168.61.245
	I0103 20:13:28.850581   62015 certs.go:190] acquiring lock for shared ca certs: {Name:mkcbd6a6a2f3ee7625ecf4a1f72bb7f9689bd33d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:13:28.850730   62015 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17885-9609/.minikube/ca.key
	I0103 20:13:28.850770   62015 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.key
	I0103 20:13:28.850833   62015 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/no-preload-749210/client.key
	I0103 20:13:28.850886   62015 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/no-preload-749210/apiserver.key.5dd805e0
	I0103 20:13:28.850922   62015 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/no-preload-749210/proxy-client.key
	I0103 20:13:28.851054   62015 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795.pem (1338 bytes)
	W0103 20:13:28.851081   62015 certs.go:433] ignoring /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795_empty.pem, impossibly tiny 0 bytes
	I0103 20:13:28.851093   62015 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem (1675 bytes)
	I0103 20:13:28.851117   62015 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem (1078 bytes)
	I0103 20:13:28.851139   62015 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem (1123 bytes)
	I0103 20:13:28.851168   62015 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem (1679 bytes)
	I0103 20:13:28.851210   62015 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem (1708 bytes)
	I0103 20:13:28.851832   62015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/no-preload-749210/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0103 20:13:28.874236   62015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/no-preload-749210/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0103 20:13:28.896624   62015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/no-preload-749210/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0103 20:13:28.919016   62015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/no-preload-749210/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0103 20:13:28.941159   62015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0103 20:13:28.963311   62015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0103 20:13:28.985568   62015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0103 20:13:29.007709   62015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0103 20:13:29.030188   62015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0103 20:13:29.052316   62015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795.pem --> /usr/share/ca-certificates/16795.pem (1338 bytes)
	I0103 20:13:29.076761   62015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem --> /usr/share/ca-certificates/167952.pem (1708 bytes)
	I0103 20:13:29.101462   62015 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0103 20:13:29.118605   62015 ssh_runner.go:195] Run: openssl version
	I0103 20:13:29.124144   62015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0103 20:13:29.133148   62015 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:13:29.137750   62015 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  3 18:58 /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:13:29.137809   62015 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:13:29.143321   62015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0103 20:13:29.152302   62015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16795.pem && ln -fs /usr/share/ca-certificates/16795.pem /etc/ssl/certs/16795.pem"
	I0103 20:13:29.161551   62015 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16795.pem
	I0103 20:13:29.166396   62015 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  3 19:07 /usr/share/ca-certificates/16795.pem
	I0103 20:13:29.166457   62015 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16795.pem
	I0103 20:13:29.173179   62015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16795.pem /etc/ssl/certs/51391683.0"
	I0103 20:13:29.184167   62015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167952.pem && ln -fs /usr/share/ca-certificates/167952.pem /etc/ssl/certs/167952.pem"
	I0103 20:13:29.194158   62015 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167952.pem
	I0103 20:13:29.198763   62015 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  3 19:07 /usr/share/ca-certificates/167952.pem
	I0103 20:13:29.198836   62015 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167952.pem
	I0103 20:13:29.204516   62015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167952.pem /etc/ssl/certs/3ec20f2e.0"
	I0103 20:13:29.214529   62015 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0103 20:13:29.218834   62015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0103 20:13:29.225036   62015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0103 20:13:29.231166   62015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0103 20:13:29.237200   62015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0103 20:13:29.243158   62015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0103 20:13:29.249694   62015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
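
Each openssl x509 -checkend 86400 run above asserts that the certificate is still valid for at least the next 24 hours. A minimal Go equivalent of one such check, using an illustrative certificate path from the log:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	// Equivalent of: openssl x509 -noout -in <cert> -checkend 86400
    	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-etcd-client.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		panic("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
    		fmt.Println("certificate expires within 24h")
    		os.Exit(1)
    	}
    	fmt.Println("certificate is valid for at least another 24h")
    }
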
	I0103 20:13:29.255582   62015 kubeadm.go:404] StartCluster: {Name:no-preload-749210 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-749210 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.245 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 20:13:29.255672   62015 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0103 20:13:29.255758   62015 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0103 20:13:29.299249   62015 cri.go:89] found id: ""
	I0103 20:13:29.299346   62015 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0103 20:13:29.311210   62015 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0103 20:13:29.311227   62015 kubeadm.go:636] restartCluster start
	I0103 20:13:29.311271   62015 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0103 20:13:29.320430   62015 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
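
The "found existing configuration files, will attempt cluster restart" decision above depends on whether the kubeadm state files are already present on the node. A minimal sketch of that existence check, run locally rather than over SSH; the file list is copied from the log:

    package main

    import (
    	"fmt"
    	"os"
    )

    func main() {
    	// Mirrors: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
    	required := []string{
    		"/var/lib/kubelet/kubeadm-flags.env",
    		"/var/lib/kubelet/config.yaml",
    		"/var/lib/minikube/etcd",
    	}
    	for _, p := range required {
    		if _, err := os.Stat(p); err != nil {
    			fmt.Println("missing", p, "- a fresh kubeadm init would be needed")
    			return
    		}
    	}
    	fmt.Println("found existing configuration files, will attempt cluster restart")
    }
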
	I0103 20:13:29.321471   62015 kubeconfig.go:92] found "no-preload-749210" server: "https://192.168.61.245:8443"
	I0103 20:13:29.324643   62015 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0103 20:13:29.333237   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:29.333300   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:29.345156   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:30.219284   61400 start.go:369] acquired machines lock for "old-k8s-version-927922" in 54.622555379s
	I0103 20:13:30.219352   61400 start.go:96] Skipping create...Using existing machine configuration
	I0103 20:13:30.219364   61400 fix.go:54] fixHost starting: 
	I0103 20:13:30.219739   61400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:13:30.219770   61400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:13:30.235529   61400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41183
	I0103 20:13:30.235926   61400 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:13:30.236537   61400 main.go:141] libmachine: Using API Version  1
	I0103 20:13:30.236562   61400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:13:30.236911   61400 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:13:30.237121   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .DriverName
	I0103 20:13:30.237293   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetState
	I0103 20:13:30.238979   61400 fix.go:102] recreateIfNeeded on old-k8s-version-927922: state=Stopped err=<nil>
	I0103 20:13:30.239006   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .DriverName
	W0103 20:13:30.239155   61400 fix.go:128] unexpected machine state, will restart: <nil>
	I0103 20:13:30.241210   61400 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-927922" ...
	I0103 20:13:29.067586   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.068030   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Found IP for machine: 192.168.39.139
	I0103 20:13:29.068048   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Reserving static IP address...
	I0103 20:13:29.068090   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has current primary IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.068505   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-018788", mac: "52:54:00:df:c8:9f", ip: "192.168.39.139"} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:13:29.068532   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | skip adding static IP to network mk-default-k8s-diff-port-018788 - found existing host DHCP lease matching {name: "default-k8s-diff-port-018788", mac: "52:54:00:df:c8:9f", ip: "192.168.39.139"}
	I0103 20:13:29.068549   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Reserved static IP address: 192.168.39.139
	I0103 20:13:29.068571   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for SSH to be available...
	I0103 20:13:29.068608   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | Getting to WaitForSSH function...
	I0103 20:13:29.071139   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.071587   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c8:9f", ip: ""} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:13:29.071620   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.071779   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | Using SSH client type: external
	I0103 20:13:29.071810   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | Using SSH private key: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/default-k8s-diff-port-018788/id_rsa (-rw-------)
	I0103 20:13:29.071858   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.139 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17885-9609/.minikube/machines/default-k8s-diff-port-018788/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0103 20:13:29.071879   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | About to run SSH command:
	I0103 20:13:29.071896   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | exit 0
	I0103 20:13:29.166962   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | SSH cmd err, output: <nil>: 
	I0103 20:13:29.167365   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetConfigRaw
	I0103 20:13:29.167989   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetIP
	I0103 20:13:29.170671   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.171052   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c8:9f", ip: ""} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:13:29.171092   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.171325   62050 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/default-k8s-diff-port-018788/config.json ...
	I0103 20:13:29.171564   62050 machine.go:88] provisioning docker machine ...
	I0103 20:13:29.171589   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .DriverName
	I0103 20:13:29.171866   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetMachineName
	I0103 20:13:29.172058   62050 buildroot.go:166] provisioning hostname "default-k8s-diff-port-018788"
	I0103 20:13:29.172084   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetMachineName
	I0103 20:13:29.172253   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHHostname
	I0103 20:13:29.175261   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.175626   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c8:9f", ip: ""} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:13:29.175660   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.175749   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHPort
	I0103 20:13:29.175943   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHKeyPath
	I0103 20:13:29.176219   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHKeyPath
	I0103 20:13:29.176392   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHUsername
	I0103 20:13:29.176611   62050 main.go:141] libmachine: Using SSH client type: native
	I0103 20:13:29.177083   62050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.39.139 22 <nil> <nil>}
	I0103 20:13:29.177105   62050 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-018788 && echo "default-k8s-diff-port-018788" | sudo tee /etc/hostname
	I0103 20:13:29.304876   62050 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-018788
	
	I0103 20:13:29.304915   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHHostname
	I0103 20:13:29.307645   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.308124   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c8:9f", ip: ""} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:13:29.308190   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.308389   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHPort
	I0103 20:13:29.308619   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHKeyPath
	I0103 20:13:29.308799   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHKeyPath
	I0103 20:13:29.308997   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHUsername
	I0103 20:13:29.309177   62050 main.go:141] libmachine: Using SSH client type: native
	I0103 20:13:29.309652   62050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.39.139 22 <nil> <nil>}
	I0103 20:13:29.309682   62050 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-018788' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-018788/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-018788' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0103 20:13:29.431479   62050 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0103 20:13:29.431517   62050 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17885-9609/.minikube CaCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17885-9609/.minikube}
	I0103 20:13:29.431555   62050 buildroot.go:174] setting up certificates
	I0103 20:13:29.431569   62050 provision.go:83] configureAuth start
	I0103 20:13:29.431582   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetMachineName
	I0103 20:13:29.431900   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetIP
	I0103 20:13:29.435012   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.435482   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c8:9f", ip: ""} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:13:29.435517   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.435638   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHHostname
	I0103 20:13:29.437865   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.438267   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c8:9f", ip: ""} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:13:29.438303   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.438388   62050 provision.go:138] copyHostCerts
	I0103 20:13:29.438448   62050 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem, removing ...
	I0103 20:13:29.438461   62050 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem
	I0103 20:13:29.438527   62050 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem (1078 bytes)
	I0103 20:13:29.438625   62050 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem, removing ...
	I0103 20:13:29.438633   62050 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem
	I0103 20:13:29.438653   62050 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem (1123 bytes)
	I0103 20:13:29.438713   62050 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem, removing ...
	I0103 20:13:29.438720   62050 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem
	I0103 20:13:29.438738   62050 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem (1679 bytes)
	I0103 20:13:29.438787   62050 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-018788 san=[192.168.39.139 192.168.39.139 localhost 127.0.0.1 minikube default-k8s-diff-port-018788]
	I0103 20:13:29.494476   62050 provision.go:172] copyRemoteCerts
	I0103 20:13:29.494562   62050 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0103 20:13:29.494590   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHHostname
	I0103 20:13:29.497330   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.497597   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c8:9f", ip: ""} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:13:29.497623   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.497786   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHPort
	I0103 20:13:29.497956   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHKeyPath
	I0103 20:13:29.498139   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHUsername
	I0103 20:13:29.498268   62050 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/default-k8s-diff-port-018788/id_rsa Username:docker}
	I0103 20:13:29.583531   62050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0103 20:13:29.605944   62050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0103 20:13:29.630747   62050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0103 20:13:29.656325   62050 provision.go:86] duration metric: configureAuth took 224.741883ms
	I0103 20:13:29.656355   62050 buildroot.go:189] setting minikube options for container-runtime
	I0103 20:13:29.656525   62050 config.go:182] Loaded profile config "default-k8s-diff-port-018788": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 20:13:29.656619   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHHostname
	I0103 20:13:29.659656   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.660182   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c8:9f", ip: ""} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:13:29.660213   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.660434   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHPort
	I0103 20:13:29.660643   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHKeyPath
	I0103 20:13:29.660864   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHKeyPath
	I0103 20:13:29.661019   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHUsername
	I0103 20:13:29.661217   62050 main.go:141] libmachine: Using SSH client type: native
	I0103 20:13:29.661571   62050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.39.139 22 <nil> <nil>}
	I0103 20:13:29.661588   62050 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0103 20:13:29.970938   62050 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0103 20:13:29.970966   62050 machine.go:91] provisioned docker machine in 799.385733ms
	I0103 20:13:29.970975   62050 start.go:300] post-start starting for "default-k8s-diff-port-018788" (driver="kvm2")
	I0103 20:13:29.970985   62050 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0103 20:13:29.971007   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .DriverName
	I0103 20:13:29.971387   62050 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0103 20:13:29.971418   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHHostname
	I0103 20:13:29.974114   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.974487   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c8:9f", ip: ""} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:13:29.974562   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.974706   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHPort
	I0103 20:13:29.974894   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHKeyPath
	I0103 20:13:29.975075   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHUsername
	I0103 20:13:29.975227   62050 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/default-k8s-diff-port-018788/id_rsa Username:docker}
	I0103 20:13:30.061987   62050 ssh_runner.go:195] Run: cat /etc/os-release
	I0103 20:13:30.066591   62050 info.go:137] Remote host: Buildroot 2021.02.12
	I0103 20:13:30.066620   62050 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-9609/.minikube/addons for local assets ...
	I0103 20:13:30.066704   62050 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-9609/.minikube/files for local assets ...
	I0103 20:13:30.066795   62050 filesync.go:149] local asset: /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem -> 167952.pem in /etc/ssl/certs
	I0103 20:13:30.066899   62050 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0103 20:13:30.076755   62050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem --> /etc/ssl/certs/167952.pem (1708 bytes)
	I0103 20:13:30.099740   62050 start.go:303] post-start completed in 128.750887ms
	I0103 20:13:30.099763   62050 fix.go:56] fixHost completed within 20.287967183s
	I0103 20:13:30.099782   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHHostname
	I0103 20:13:30.102744   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:30.103145   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c8:9f", ip: ""} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:13:30.103177   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:30.103409   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHPort
	I0103 20:13:30.103633   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHKeyPath
	I0103 20:13:30.103846   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHKeyPath
	I0103 20:13:30.104080   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHUsername
	I0103 20:13:30.104308   62050 main.go:141] libmachine: Using SSH client type: native
	I0103 20:13:30.104680   62050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.39.139 22 <nil> <nil>}
	I0103 20:13:30.104696   62050 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0103 20:13:30.219120   62050 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704312810.161605674
	
	I0103 20:13:30.219145   62050 fix.go:206] guest clock: 1704312810.161605674
	I0103 20:13:30.219154   62050 fix.go:219] Guest: 2024-01-03 20:13:30.161605674 +0000 UTC Remote: 2024-01-03 20:13:30.099767061 +0000 UTC m=+264.645600185 (delta=61.838613ms)
	I0103 20:13:30.219191   62050 fix.go:190] guest clock delta is within tolerance: 61.838613ms
	I0103 20:13:30.219202   62050 start.go:83] releasing machines lock for "default-k8s-diff-port-018788", held for 20.407440359s
	I0103 20:13:30.219230   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .DriverName
	I0103 20:13:30.219551   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetIP
	I0103 20:13:30.222200   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:30.222616   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c8:9f", ip: ""} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:13:30.222650   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:30.222811   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .DriverName
	I0103 20:13:30.223411   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .DriverName
	I0103 20:13:30.223568   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .DriverName
	I0103 20:13:30.223643   62050 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0103 20:13:30.223686   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHHostname
	I0103 20:13:30.223940   62050 ssh_runner.go:195] Run: cat /version.json
	I0103 20:13:30.223970   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHHostname
	I0103 20:13:30.226394   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:30.226746   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c8:9f", ip: ""} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:13:30.226777   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:30.226809   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:30.227080   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHPort
	I0103 20:13:30.227274   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHKeyPath
	I0103 20:13:30.227389   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c8:9f", ip: ""} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:13:30.227443   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHUsername
	I0103 20:13:30.227446   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:30.227567   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHPort
	I0103 20:13:30.227595   62050 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/default-k8s-diff-port-018788/id_rsa Username:docker}
	I0103 20:13:30.227739   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHKeyPath
	I0103 20:13:30.227864   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHUsername
	I0103 20:13:30.227972   62050 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/default-k8s-diff-port-018788/id_rsa Username:docker}
	I0103 20:13:30.315855   62050 ssh_runner.go:195] Run: systemctl --version
	I0103 20:13:30.359117   62050 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0103 20:13:30.499200   62050 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0103 20:13:30.505296   62050 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0103 20:13:30.505768   62050 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0103 20:13:30.520032   62050 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0103 20:13:30.520059   62050 start.go:475] detecting cgroup driver to use...
	I0103 20:13:30.520146   62050 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0103 20:13:30.532684   62050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0103 20:13:30.545152   62050 docker.go:203] disabling cri-docker service (if available) ...
	I0103 20:13:30.545222   62050 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0103 20:13:30.558066   62050 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0103 20:13:30.570999   62050 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0103 20:13:30.682484   62050 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0103 20:13:30.802094   62050 docker.go:219] disabling docker service ...
	I0103 20:13:30.802171   62050 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0103 20:13:30.815796   62050 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0103 20:13:30.827982   62050 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0103 20:13:30.952442   62050 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0103 20:13:31.068759   62050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0103 20:13:31.083264   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0103 20:13:31.102893   62050 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0103 20:13:31.102979   62050 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:13:31.112366   62050 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0103 20:13:31.112433   62050 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:13:31.122940   62050 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:13:31.133385   62050 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:13:31.144251   62050 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0103 20:13:31.155210   62050 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0103 20:13:31.164488   62050 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0103 20:13:31.164552   62050 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0103 20:13:31.177632   62050 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0103 20:13:31.186983   62050 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0103 20:13:31.309264   62050 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0103 20:13:31.493626   62050 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0103 20:13:31.493706   62050 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0103 20:13:31.504103   62050 start.go:543] Will wait 60s for crictl version
	I0103 20:13:31.504187   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:13:31.507927   62050 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0103 20:13:31.543967   62050 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0103 20:13:31.544046   62050 ssh_runner.go:195] Run: crio --version
	I0103 20:13:31.590593   62050 ssh_runner.go:195] Run: crio --version
	I0103 20:13:31.639562   62050 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0103 20:13:30.242808   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .Start
	I0103 20:13:30.242991   61400 main.go:141] libmachine: (old-k8s-version-927922) Ensuring networks are active...
	I0103 20:13:30.243776   61400 main.go:141] libmachine: (old-k8s-version-927922) Ensuring network default is active
	I0103 20:13:30.244126   61400 main.go:141] libmachine: (old-k8s-version-927922) Ensuring network mk-old-k8s-version-927922 is active
	I0103 20:13:30.244504   61400 main.go:141] libmachine: (old-k8s-version-927922) Getting domain xml...
	I0103 20:13:30.245244   61400 main.go:141] libmachine: (old-k8s-version-927922) Creating domain...
	I0103 20:13:31.553239   61400 main.go:141] libmachine: (old-k8s-version-927922) Waiting to get IP...
	I0103 20:13:31.554409   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:31.554942   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | unable to find current IP address of domain old-k8s-version-927922 in network mk-old-k8s-version-927922
	I0103 20:13:31.555022   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | I0103 20:13:31.554922   63030 retry.go:31] will retry after 192.654673ms: waiting for machine to come up
	I0103 20:13:31.749588   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:31.750035   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | unable to find current IP address of domain old-k8s-version-927922 in network mk-old-k8s-version-927922
	I0103 20:13:31.750058   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | I0103 20:13:31.750000   63030 retry.go:31] will retry after 270.810728ms: waiting for machine to come up
	I0103 20:13:32.022736   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:32.023310   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | unable to find current IP address of domain old-k8s-version-927922 in network mk-old-k8s-version-927922
	I0103 20:13:32.023337   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | I0103 20:13:32.023280   63030 retry.go:31] will retry after 327.320898ms: waiting for machine to come up
	I0103 20:13:32.352845   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:32.353453   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | unable to find current IP address of domain old-k8s-version-927922 in network mk-old-k8s-version-927922
	I0103 20:13:32.353501   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | I0103 20:13:32.353395   63030 retry.go:31] will retry after 575.525231ms: waiting for machine to come up
	I0103 20:13:32.930217   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:32.930833   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | unable to find current IP address of domain old-k8s-version-927922 in network mk-old-k8s-version-927922
	I0103 20:13:32.930859   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | I0103 20:13:32.930741   63030 retry.go:31] will retry after 571.986596ms: waiting for machine to come up
	I0103 20:13:30.936363   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:13:32.939164   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:13:29.833307   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:29.833374   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:29.844819   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:30.333870   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:30.333936   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:30.345802   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:30.833281   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:30.833400   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:30.848469   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:31.334071   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:31.334151   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:31.346445   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:31.833944   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:31.834034   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:31.848925   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:32.333349   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:32.333432   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:32.349173   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:32.833632   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:32.833696   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:32.848186   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:33.333659   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:33.333757   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:33.349560   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:33.834221   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:33.834309   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:33.846637   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:34.334219   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:34.334299   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:34.350703   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:31.641182   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetIP
	I0103 20:13:31.644371   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:31.644677   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c8:9f", ip: ""} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:13:31.644712   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:31.644971   62050 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0103 20:13:31.649106   62050 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0103 20:13:31.662256   62050 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0103 20:13:31.662380   62050 ssh_runner.go:195] Run: sudo crictl images --output json
	I0103 20:13:31.701210   62050 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0103 20:13:31.701275   62050 ssh_runner.go:195] Run: which lz4
	I0103 20:13:31.704890   62050 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0103 20:13:31.708756   62050 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0103 20:13:31.708783   62050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0103 20:13:33.543202   62050 crio.go:444] Took 1.838336 seconds to copy over tarball
	I0103 20:13:33.543282   62050 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0103 20:13:33.504797   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:33.505336   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | unable to find current IP address of domain old-k8s-version-927922 in network mk-old-k8s-version-927922
	I0103 20:13:33.505363   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | I0103 20:13:33.505286   63030 retry.go:31] will retry after 593.865088ms: waiting for machine to come up
	I0103 20:13:34.101055   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:34.101559   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | unable to find current IP address of domain old-k8s-version-927922 in network mk-old-k8s-version-927922
	I0103 20:13:34.101593   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | I0103 20:13:34.101507   63030 retry.go:31] will retry after 1.016460442s: waiting for machine to come up
	I0103 20:13:35.119877   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:35.120383   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | unable to find current IP address of domain old-k8s-version-927922 in network mk-old-k8s-version-927922
	I0103 20:13:35.120415   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | I0103 20:13:35.120352   63030 retry.go:31] will retry after 1.462823241s: waiting for machine to come up
	I0103 20:13:36.585467   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:36.585968   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | unable to find current IP address of domain old-k8s-version-927922 in network mk-old-k8s-version-927922
	I0103 20:13:36.585993   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | I0103 20:13:36.585932   63030 retry.go:31] will retry after 1.213807131s: waiting for machine to come up
	I0103 20:13:37.801504   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:37.801970   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | unable to find current IP address of domain old-k8s-version-927922 in network mk-old-k8s-version-927922
	I0103 20:13:37.801999   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | I0103 20:13:37.801896   63030 retry.go:31] will retry after 1.961227471s: waiting for machine to come up
	I0103 20:13:35.435661   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:13:37.435870   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:13:34.834090   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:34.834160   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:34.848657   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:35.333723   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:35.333809   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:35.348582   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:35.834128   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:35.834208   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:35.845911   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:36.333385   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:36.333512   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:36.346391   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:36.833978   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:36.834054   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:36.847134   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:37.333698   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:37.333785   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:37.346411   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:37.834024   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:37.834141   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:37.846961   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:38.333461   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:38.333665   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:38.346713   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:38.834378   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:38.834470   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:38.848473   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:39.333266   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:39.333347   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:39.345638   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:39.345664   62015 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0103 20:13:39.345692   62015 kubeadm.go:1135] stopping kube-system containers ...
	I0103 20:13:39.345721   62015 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0103 20:13:39.345792   62015 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0103 20:13:39.387671   62015 cri.go:89] found id: ""
	I0103 20:13:39.387778   62015 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0103 20:13:39.403523   62015 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0103 20:13:39.413114   62015 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0103 20:13:39.413188   62015 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0103 20:13:39.421503   62015 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0103 20:13:39.421527   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:39.561406   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:36.473303   62050 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.929985215s)
	I0103 20:13:36.473337   62050 crio.go:451] Took 2.930104 seconds to extract the tarball
	I0103 20:13:36.473350   62050 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0103 20:13:36.513202   62050 ssh_runner.go:195] Run: sudo crictl images --output json
	I0103 20:13:36.557201   62050 crio.go:496] all images are preloaded for cri-o runtime.
	I0103 20:13:36.557231   62050 cache_images.go:84] Images are preloaded, skipping loading
	I0103 20:13:36.557314   62050 ssh_runner.go:195] Run: crio config
	I0103 20:13:36.618916   62050 cni.go:84] Creating CNI manager for ""
	I0103 20:13:36.618948   62050 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0103 20:13:36.618982   62050 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0103 20:13:36.619007   62050 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.139 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-018788 NodeName:default-k8s-diff-port-018788 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.139"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.139 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0103 20:13:36.619167   62050 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.139
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-018788"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.139
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.139"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0103 20:13:36.619242   62050 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-018788 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.139
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-018788 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0103 20:13:36.619294   62050 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0103 20:13:36.628488   62050 binaries.go:44] Found k8s binaries, skipping transfer
	I0103 20:13:36.628571   62050 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0103 20:13:36.637479   62050 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I0103 20:13:36.652608   62050 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0103 20:13:36.667432   62050 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I0103 20:13:36.683138   62050 ssh_runner.go:195] Run: grep 192.168.39.139	control-plane.minikube.internal$ /etc/hosts
	I0103 20:13:36.687022   62050 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.139	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0103 20:13:36.698713   62050 certs.go:56] Setting up /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/default-k8s-diff-port-018788 for IP: 192.168.39.139
	I0103 20:13:36.698755   62050 certs.go:190] acquiring lock for shared ca certs: {Name:mkcbd6a6a2f3ee7625ecf4a1f72bb7f9689bd33d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:13:36.698948   62050 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17885-9609/.minikube/ca.key
	I0103 20:13:36.699009   62050 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.key
	I0103 20:13:36.699098   62050 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/default-k8s-diff-port-018788/client.key
	I0103 20:13:36.699157   62050 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/default-k8s-diff-port-018788/apiserver.key.7716debd
	I0103 20:13:36.699196   62050 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/default-k8s-diff-port-018788/proxy-client.key
	I0103 20:13:36.699287   62050 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795.pem (1338 bytes)
	W0103 20:13:36.699314   62050 certs.go:433] ignoring /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795_empty.pem, impossibly tiny 0 bytes
	I0103 20:13:36.699324   62050 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem (1675 bytes)
	I0103 20:13:36.699349   62050 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem (1078 bytes)
	I0103 20:13:36.699370   62050 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem (1123 bytes)
	I0103 20:13:36.699395   62050 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem (1679 bytes)
	I0103 20:13:36.699434   62050 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem (1708 bytes)
	I0103 20:13:36.700045   62050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/default-k8s-diff-port-018788/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0103 20:13:36.721872   62050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/default-k8s-diff-port-018788/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0103 20:13:36.744733   62050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/default-k8s-diff-port-018788/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0103 20:13:36.772245   62050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/default-k8s-diff-port-018788/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0103 20:13:36.796690   62050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0103 20:13:36.819792   62050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0103 20:13:36.843109   62050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0103 20:13:36.866679   62050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0103 20:13:36.889181   62050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0103 20:13:36.912082   62050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795.pem --> /usr/share/ca-certificates/16795.pem (1338 bytes)
	I0103 20:13:36.935621   62050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem --> /usr/share/ca-certificates/167952.pem (1708 bytes)
	I0103 20:13:36.959090   62050 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0103 20:13:36.974873   62050 ssh_runner.go:195] Run: openssl version
	I0103 20:13:36.980449   62050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167952.pem && ln -fs /usr/share/ca-certificates/167952.pem /etc/ssl/certs/167952.pem"
	I0103 20:13:36.990278   62050 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167952.pem
	I0103 20:13:36.995822   62050 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  3 19:07 /usr/share/ca-certificates/167952.pem
	I0103 20:13:36.995903   62050 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167952.pem
	I0103 20:13:37.001504   62050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167952.pem /etc/ssl/certs/3ec20f2e.0"
	I0103 20:13:37.011628   62050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0103 20:13:37.021373   62050 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:13:37.025697   62050 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  3 18:58 /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:13:37.025752   62050 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:13:37.031286   62050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0103 20:13:37.041075   62050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16795.pem && ln -fs /usr/share/ca-certificates/16795.pem /etc/ssl/certs/16795.pem"
	I0103 20:13:37.050789   62050 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16795.pem
	I0103 20:13:37.055584   62050 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  3 19:07 /usr/share/ca-certificates/16795.pem
	I0103 20:13:37.055647   62050 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16795.pem
	I0103 20:13:37.061079   62050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16795.pem /etc/ssl/certs/51391683.0"
	I0103 20:13:37.070792   62050 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0103 20:13:37.075050   62050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0103 20:13:37.081170   62050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0103 20:13:37.087372   62050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0103 20:13:37.093361   62050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0103 20:13:37.099203   62050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0103 20:13:37.104932   62050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0103 20:13:37.110783   62050 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-018788 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-018788 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.139 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 20:13:37.110955   62050 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0103 20:13:37.111003   62050 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0103 20:13:37.146687   62050 cri.go:89] found id: ""
	I0103 20:13:37.146766   62050 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0103 20:13:37.156789   62050 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0103 20:13:37.156808   62050 kubeadm.go:636] restartCluster start
	I0103 20:13:37.156882   62050 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0103 20:13:37.166168   62050 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:37.167346   62050 kubeconfig.go:92] found "default-k8s-diff-port-018788" server: "https://192.168.39.139:8444"
	I0103 20:13:37.169750   62050 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0103 20:13:37.178965   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:37.179035   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:37.190638   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:37.679072   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:37.679142   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:37.691149   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:38.179709   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:38.179804   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:38.191656   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:38.679825   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:38.679912   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:38.693380   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:39.179927   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:39.180042   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:39.193368   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:39.679947   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:39.680049   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:39.692444   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:40.179510   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:40.179600   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:40.192218   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:39.764226   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:39.764651   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | unable to find current IP address of domain old-k8s-version-927922 in network mk-old-k8s-version-927922
	I0103 20:13:39.764681   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | I0103 20:13:39.764592   63030 retry.go:31] will retry after 2.38598238s: waiting for machine to come up
	I0103 20:13:42.151992   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:42.152486   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | unable to find current IP address of domain old-k8s-version-927922 in network mk-old-k8s-version-927922
	I0103 20:13:42.152517   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | I0103 20:13:42.152435   63030 retry.go:31] will retry after 3.320569317s: waiting for machine to come up
	I0103 20:13:39.438887   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:13:41.441552   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:13:40.707462   62015 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.146014282s)
	I0103 20:13:40.707501   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:40.913812   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:41.008294   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:41.093842   62015 api_server.go:52] waiting for apiserver process to appear ...
	I0103 20:13:41.093931   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:13:41.594484   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:13:42.094333   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:13:42.594647   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:13:43.094744   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:13:43.594323   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:13:43.628624   62015 api_server.go:72] duration metric: took 2.534781213s to wait for apiserver process to appear ...
	I0103 20:13:43.628653   62015 api_server.go:88] waiting for apiserver healthz status ...
	I0103 20:13:43.628674   62015 api_server.go:253] Checking apiserver healthz at https://192.168.61.245:8443/healthz ...
	I0103 20:13:40.679867   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:40.679959   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:40.692707   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:41.179865   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:41.179962   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:41.192901   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:41.679604   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:41.679668   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:41.691755   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:42.179959   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:42.180082   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:42.193149   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:42.679682   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:42.679808   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:42.696777   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:43.179236   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:43.179343   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:43.195021   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:43.679230   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:43.679339   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:43.696886   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:44.179488   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:44.179558   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:44.194865   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:44.679087   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:44.679216   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:44.693383   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:45.179505   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:45.179607   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:45.190496   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:45.474145   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:45.474596   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | unable to find current IP address of domain old-k8s-version-927922 in network mk-old-k8s-version-927922
	I0103 20:13:45.474623   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | I0103 20:13:45.474542   63030 retry.go:31] will retry after 3.652901762s: waiting for machine to come up
	I0103 20:13:43.937146   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:13:45.938328   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:13:47.941499   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:13:47.277935   62015 api_server.go:279] https://192.168.61.245:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0103 20:13:47.277971   62015 api_server.go:103] status: https://192.168.61.245:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0103 20:13:47.277988   62015 api_server.go:253] Checking apiserver healthz at https://192.168.61.245:8443/healthz ...
	I0103 20:13:47.543418   62015 api_server.go:279] https://192.168.61.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0103 20:13:47.543449   62015 api_server.go:103] status: https://192.168.61.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0103 20:13:47.629720   62015 api_server.go:253] Checking apiserver healthz at https://192.168.61.245:8443/healthz ...
	I0103 20:13:47.635340   62015 api_server.go:279] https://192.168.61.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0103 20:13:47.635373   62015 api_server.go:103] status: https://192.168.61.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0103 20:13:48.128849   62015 api_server.go:253] Checking apiserver healthz at https://192.168.61.245:8443/healthz ...
	I0103 20:13:48.135534   62015 api_server.go:279] https://192.168.61.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0103 20:13:48.135576   62015 api_server.go:103] status: https://192.168.61.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0103 20:13:48.628977   62015 api_server.go:253] Checking apiserver healthz at https://192.168.61.245:8443/healthz ...
	I0103 20:13:48.634609   62015 api_server.go:279] https://192.168.61.245:8443/healthz returned 200:
	ok
	I0103 20:13:48.643475   62015 api_server.go:141] control plane version: v1.29.0-rc.2
	I0103 20:13:48.643505   62015 api_server.go:131] duration metric: took 5.01484434s to wait for apiserver health ...
	I0103 20:13:48.643517   62015 cni.go:84] Creating CNI manager for ""
	I0103 20:13:48.643526   62015 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0103 20:13:48.645945   62015 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0103 20:13:48.647556   62015 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0103 20:13:48.671093   62015 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0103 20:13:48.698710   62015 system_pods.go:43] waiting for kube-system pods to appear ...
	I0103 20:13:48.712654   62015 system_pods.go:59] 8 kube-system pods found
	I0103 20:13:48.712704   62015 system_pods.go:61] "coredns-76f75df574-rbx58" [d5e91e6a-e3f9-4dbc-83ff-3069cb67847c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0103 20:13:48.712717   62015 system_pods.go:61] "etcd-no-preload-749210" [3cfe84f3-28bd-490f-a7fc-152c1b9784ce] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0103 20:13:48.712729   62015 system_pods.go:61] "kube-apiserver-no-preload-749210" [1d9d03fa-23c6-4432-b7ec-905fcab8a628] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0103 20:13:48.712739   62015 system_pods.go:61] "kube-controller-manager-no-preload-749210" [4e4207ef-8844-4547-88a4-b12026250554] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0103 20:13:48.712761   62015 system_pods.go:61] "kube-proxy-5hwf4" [98fafdf5-9a74-4c9f-96eb-20064c72c4e1] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0103 20:13:48.712771   62015 system_pods.go:61] "kube-scheduler-no-preload-749210" [21e70024-26b0-4740-ba52-99893ca20809] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0103 20:13:48.712780   62015 system_pods.go:61] "metrics-server-57f55c9bc5-tqn5m" [8cc1dc91-fafb-4405-8820-a7f99ccbbb0c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0103 20:13:48.712793   62015 system_pods.go:61] "storage-provisioner" [1bf4f1d7-c083-47e7-9976-76bbc72e7bff] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0103 20:13:48.712806   62015 system_pods.go:74] duration metric: took 14.071881ms to wait for pod list to return data ...
	I0103 20:13:48.712818   62015 node_conditions.go:102] verifying NodePressure condition ...
	I0103 20:13:48.716271   62015 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0103 20:13:48.716301   62015 node_conditions.go:123] node cpu capacity is 2
	I0103 20:13:48.716326   62015 node_conditions.go:105] duration metric: took 3.496257ms to run NodePressure ...
	I0103 20:13:48.716348   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:49.020956   62015 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0103 20:13:49.025982   62015 kubeadm.go:787] kubelet initialised
	I0103 20:13:49.026003   62015 kubeadm.go:788] duration metric: took 5.022549ms waiting for restarted kubelet to initialise ...
	I0103 20:13:49.026010   62015 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:13:49.033471   62015 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-rbx58" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:49.038777   62015 pod_ready.go:97] node "no-preload-749210" hosting pod "coredns-76f75df574-rbx58" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-749210" has status "Ready":"False"
	I0103 20:13:49.038806   62015 pod_ready.go:81] duration metric: took 5.286579ms waiting for pod "coredns-76f75df574-rbx58" in "kube-system" namespace to be "Ready" ...
	E0103 20:13:49.038823   62015 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-749210" hosting pod "coredns-76f75df574-rbx58" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-749210" has status "Ready":"False"
	I0103 20:13:49.038834   62015 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-749210" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:49.044324   62015 pod_ready.go:97] node "no-preload-749210" hosting pod "etcd-no-preload-749210" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-749210" has status "Ready":"False"
	I0103 20:13:49.044349   62015 pod_ready.go:81] duration metric: took 5.506628ms waiting for pod "etcd-no-preload-749210" in "kube-system" namespace to be "Ready" ...
	E0103 20:13:49.044357   62015 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-749210" hosting pod "etcd-no-preload-749210" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-749210" has status "Ready":"False"
	I0103 20:13:49.044363   62015 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-749210" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:49.049022   62015 pod_ready.go:97] node "no-preload-749210" hosting pod "kube-apiserver-no-preload-749210" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-749210" has status "Ready":"False"
	I0103 20:13:49.049058   62015 pod_ready.go:81] duration metric: took 4.681942ms waiting for pod "kube-apiserver-no-preload-749210" in "kube-system" namespace to be "Ready" ...
	E0103 20:13:49.049068   62015 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-749210" hosting pod "kube-apiserver-no-preload-749210" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-749210" has status "Ready":"False"
	I0103 20:13:49.049073   62015 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-749210" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:49.102378   62015 pod_ready.go:97] node "no-preload-749210" hosting pod "kube-controller-manager-no-preload-749210" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-749210" has status "Ready":"False"
	I0103 20:13:49.102407   62015 pod_ready.go:81] duration metric: took 53.323019ms waiting for pod "kube-controller-manager-no-preload-749210" in "kube-system" namespace to be "Ready" ...
	E0103 20:13:49.102415   62015 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-749210" hosting pod "kube-controller-manager-no-preload-749210" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-749210" has status "Ready":"False"
	I0103 20:13:49.102424   62015 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-5hwf4" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:49.504820   62015 pod_ready.go:97] node "no-preload-749210" hosting pod "kube-proxy-5hwf4" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-749210" has status "Ready":"False"
	I0103 20:13:49.504852   62015 pod_ready.go:81] duration metric: took 402.417876ms waiting for pod "kube-proxy-5hwf4" in "kube-system" namespace to be "Ready" ...
	E0103 20:13:49.504865   62015 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-749210" hosting pod "kube-proxy-5hwf4" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-749210" has status "Ready":"False"
	I0103 20:13:49.504875   62015 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-749210" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:49.905230   62015 pod_ready.go:97] node "no-preload-749210" hosting pod "kube-scheduler-no-preload-749210" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-749210" has status "Ready":"False"
	I0103 20:13:49.905265   62015 pod_ready.go:81] duration metric: took 400.380902ms waiting for pod "kube-scheduler-no-preload-749210" in "kube-system" namespace to be "Ready" ...
	E0103 20:13:49.905278   62015 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-749210" hosting pod "kube-scheduler-no-preload-749210" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-749210" has status "Ready":"False"
	I0103 20:13:49.905287   62015 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:50.304848   62015 pod_ready.go:97] node "no-preload-749210" hosting pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-749210" has status "Ready":"False"
	I0103 20:13:50.304883   62015 pod_ready.go:81] duration metric: took 399.567527ms waiting for pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace to be "Ready" ...
	E0103 20:13:50.304896   62015 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-749210" hosting pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-749210" has status "Ready":"False"
	I0103 20:13:50.304905   62015 pod_ready.go:38] duration metric: took 1.278887327s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:13:50.304926   62015 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0103 20:13:50.331405   62015 ops.go:34] apiserver oom_adj: -16
	I0103 20:13:50.331428   62015 kubeadm.go:640] restartCluster took 21.020194358s
	I0103 20:13:50.331439   62015 kubeadm.go:406] StartCluster complete in 21.075864121s
	I0103 20:13:50.331459   62015 settings.go:142] acquiring lock: {Name:mkd213c48538fa01cb82b417485055a8adbf5e4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:13:50.331541   62015 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17885-9609/kubeconfig
	I0103 20:13:50.333553   62015 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-9609/kubeconfig: {Name:mkbd4e6a8b39f5a4a43fb71671a7bbd8b1617cf1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:13:50.333969   62015 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0103 20:13:50.334045   62015 addons.go:69] Setting storage-provisioner=true in profile "no-preload-749210"
	I0103 20:13:50.334064   62015 addons.go:237] Setting addon storage-provisioner=true in "no-preload-749210"
	W0103 20:13:50.334072   62015 addons.go:246] addon storage-provisioner should already be in state true
	I0103 20:13:50.334082   62015 config.go:182] Loaded profile config "no-preload-749210": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0103 20:13:50.334121   62015 host.go:66] Checking if "no-preload-749210" exists ...
	I0103 20:13:50.334129   62015 addons.go:69] Setting default-storageclass=true in profile "no-preload-749210"
	I0103 20:13:50.334143   62015 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-749210"
	I0103 20:13:50.334556   62015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:13:50.334588   62015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:13:50.334602   62015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:13:50.334620   62015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:13:50.334681   62015 addons.go:69] Setting metrics-server=true in profile "no-preload-749210"
	I0103 20:13:50.334708   62015 addons.go:237] Setting addon metrics-server=true in "no-preload-749210"
	I0103 20:13:50.334712   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	W0103 20:13:50.334717   62015 addons.go:246] addon metrics-server should already be in state true
	I0103 20:13:50.334756   62015 host.go:66] Checking if "no-preload-749210" exists ...
	I0103 20:13:50.335152   62015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:13:50.335190   62015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:13:50.343173   62015 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-749210" context rescaled to 1 replicas
	I0103 20:13:50.343213   62015 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.245 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0103 20:13:50.345396   62015 out.go:177] * Verifying Kubernetes components...
	I0103 20:13:50.347721   62015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 20:13:50.353122   62015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34207
	I0103 20:13:50.353250   62015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35835
	I0103 20:13:50.353274   62015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44003
	I0103 20:13:50.353737   62015 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:13:50.353896   62015 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:13:50.354283   62015 main.go:141] libmachine: Using API Version  1
	I0103 20:13:50.354299   62015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:13:50.354488   62015 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:13:50.354491   62015 main.go:141] libmachine: Using API Version  1
	I0103 20:13:50.354588   62015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:13:50.354889   62015 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:13:50.355115   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetState
	I0103 20:13:50.355165   62015 main.go:141] libmachine: Using API Version  1
	I0103 20:13:50.355181   62015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:13:50.355244   62015 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:13:50.355746   62015 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:13:50.356199   62015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:13:50.356239   62015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:13:50.356792   62015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:13:50.356830   62015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:13:50.359095   62015 addons.go:237] Setting addon default-storageclass=true in "no-preload-749210"
	W0103 20:13:50.359114   62015 addons.go:246] addon default-storageclass should already be in state true
	I0103 20:13:50.359139   62015 host.go:66] Checking if "no-preload-749210" exists ...
	I0103 20:13:50.359554   62015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:13:50.359595   62015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:13:50.377094   62015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34801
	I0103 20:13:50.377218   62015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33435
	I0103 20:13:50.377679   62015 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:13:50.377779   62015 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:13:50.378353   62015 main.go:141] libmachine: Using API Version  1
	I0103 20:13:50.378376   62015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:13:50.378472   62015 main.go:141] libmachine: Using API Version  1
	I0103 20:13:50.378488   62015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:13:50.378816   62015 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:13:50.378874   62015 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:13:50.379033   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetState
	I0103 20:13:50.379033   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetState
	I0103 20:13:50.381013   62015 main.go:141] libmachine: (no-preload-749210) Calling .DriverName
	I0103 20:13:50.381240   62015 main.go:141] libmachine: (no-preload-749210) Calling .DriverName
	I0103 20:13:50.389265   62015 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 20:13:50.383848   62015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38103
	I0103 20:13:50.391000   62015 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0103 20:13:50.391023   62015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0103 20:13:50.391049   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHHostname
	I0103 20:13:50.391062   62015 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0103 20:13:45.679265   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:45.679374   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:45.690232   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:46.179862   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:46.179963   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:46.190942   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:46.679624   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:46.679738   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:46.691578   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:47.179185   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:47.179280   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:47.193995   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:47.194029   62050 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0103 20:13:47.194050   62050 kubeadm.go:1135] stopping kube-system containers ...
	I0103 20:13:47.194061   62050 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0103 20:13:47.194114   62050 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0103 20:13:47.235512   62050 cri.go:89] found id: ""
	I0103 20:13:47.235625   62050 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0103 20:13:47.251115   62050 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0103 20:13:47.261566   62050 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0103 20:13:47.261631   62050 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0103 20:13:47.271217   62050 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0103 20:13:47.271244   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:47.408550   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:48.262356   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:48.492357   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:48.597607   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:48.699097   62050 api_server.go:52] waiting for apiserver process to appear ...
	I0103 20:13:48.699194   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:13:49.199349   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:13:49.699758   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:13:50.199818   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:13:50.392557   62015 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0103 20:13:50.392577   62015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0103 20:13:50.392597   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHHostname
	I0103 20:13:50.391469   62015 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:13:50.393835   62015 main.go:141] libmachine: Using API Version  1
	I0103 20:13:50.393854   62015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:13:50.394340   62015 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:13:50.394967   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:50.395384   62015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:13:50.395419   62015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:13:50.395602   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHPort
	I0103 20:13:50.395663   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:87:c7", ip: ""} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:50.395683   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:50.395811   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHKeyPath
	I0103 20:13:50.395981   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHUsername
	I0103 20:13:50.396173   62015 sshutil.go:53] new ssh client: &{IP:192.168.61.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/no-preload-749210/id_rsa Username:docker}
	I0103 20:13:50.398544   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:50.399117   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:87:c7", ip: ""} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:50.399142   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:50.399363   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHPort
	I0103 20:13:50.399582   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHKeyPath
	I0103 20:13:50.399692   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHUsername
	I0103 20:13:50.399761   62015 sshutil.go:53] new ssh client: &{IP:192.168.61.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/no-preload-749210/id_rsa Username:docker}
	I0103 20:13:50.434719   62015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44691
	I0103 20:13:50.435279   62015 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:13:50.435938   62015 main.go:141] libmachine: Using API Version  1
	I0103 20:13:50.435972   62015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:13:50.436407   62015 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:13:50.436630   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetState
	I0103 20:13:50.438992   62015 main.go:141] libmachine: (no-preload-749210) Calling .DriverName
	I0103 20:13:50.442816   62015 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0103 20:13:50.442835   62015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0103 20:13:50.442856   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHHostname
	I0103 20:13:50.450157   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:50.451549   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:87:c7", ip: ""} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:50.451575   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHPort
	I0103 20:13:50.451571   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:50.453023   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHKeyPath
	I0103 20:13:50.453577   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHUsername
	I0103 20:13:50.453753   62015 sshutil.go:53] new ssh client: &{IP:192.168.61.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/no-preload-749210/id_rsa Username:docker}
	I0103 20:13:50.556135   62015 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0103 20:13:50.556161   62015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0103 20:13:50.583620   62015 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0103 20:13:50.583643   62015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0103 20:13:50.589708   62015 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0103 20:13:50.614203   62015 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0103 20:13:50.631936   62015 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0103 20:13:50.631961   62015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0103 20:13:50.708658   62015 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0103 20:13:50.772364   62015 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0103 20:13:50.772434   62015 node_ready.go:35] waiting up to 6m0s for node "no-preload-749210" to be "Ready" ...
	I0103 20:13:51.785361   62015 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.195620446s)
	I0103 20:13:51.785407   62015 main.go:141] libmachine: Making call to close driver server
	I0103 20:13:51.785421   62015 main.go:141] libmachine: (no-preload-749210) Calling .Close
	I0103 20:13:51.785427   62015 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.171187695s)
	I0103 20:13:51.785463   62015 main.go:141] libmachine: Making call to close driver server
	I0103 20:13:51.785488   62015 main.go:141] libmachine: (no-preload-749210) Calling .Close
	I0103 20:13:51.785603   62015 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.076908391s)
	I0103 20:13:51.785687   62015 main.go:141] libmachine: (no-preload-749210) DBG | Closing plugin on server side
	I0103 20:13:51.785717   62015 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:13:51.785730   62015 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:13:51.785739   62015 main.go:141] libmachine: Making call to close driver server
	I0103 20:13:51.785741   62015 main.go:141] libmachine: (no-preload-749210) DBG | Closing plugin on server side
	I0103 20:13:51.785748   62015 main.go:141] libmachine: (no-preload-749210) Calling .Close
	I0103 20:13:51.785819   62015 main.go:141] libmachine: Making call to close driver server
	I0103 20:13:51.785837   62015 main.go:141] libmachine: (no-preload-749210) Calling .Close
	I0103 20:13:51.786108   62015 main.go:141] libmachine: (no-preload-749210) DBG | Closing plugin on server side
	I0103 20:13:51.786143   62015 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:13:51.786152   62015 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:13:51.786166   62015 main.go:141] libmachine: Making call to close driver server
	I0103 20:13:51.786178   62015 main.go:141] libmachine: (no-preload-749210) Calling .Close
	I0103 20:13:51.786444   62015 main.go:141] libmachine: (no-preload-749210) DBG | Closing plugin on server side
	I0103 20:13:51.786495   62015 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:13:51.786536   62015 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:13:51.786553   62015 addons.go:473] Verifying addon metrics-server=true in "no-preload-749210"
	I0103 20:13:51.787346   62015 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:13:51.787365   62015 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:13:51.787376   62015 main.go:141] libmachine: Making call to close driver server
	I0103 20:13:51.787386   62015 main.go:141] libmachine: (no-preload-749210) Calling .Close
	I0103 20:13:51.787596   62015 main.go:141] libmachine: (no-preload-749210) DBG | Closing plugin on server side
	I0103 20:13:51.787638   62015 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:13:51.787652   62015 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:13:51.787855   62015 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:13:51.787859   62015 main.go:141] libmachine: (no-preload-749210) DBG | Closing plugin on server side
	I0103 20:13:51.787871   62015 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:13:51.797560   62015 main.go:141] libmachine: Making call to close driver server
	I0103 20:13:51.797584   62015 main.go:141] libmachine: (no-preload-749210) Calling .Close
	I0103 20:13:51.797860   62015 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:13:51.797874   62015 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:13:51.800087   62015 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
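The addon step logged above reduces to copying rendered manifests onto the guest and running `kubectl apply -f` against them with the node-local kubeconfig. A minimal illustrative sketch of that pattern in Go follows; it is not minikube's internal addons code — the kubeconfig and manifest paths are taken from the log, everything else (function name, error handling) is assumed.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyAddon runs `kubectl apply -f` for one or more addon manifests,
// mirroring the invocations visible in the log. It assumes kubectl and
// the kubeconfig path exist on the machine where it runs.
func applyAddon(kubeconfig string, manifests ...string) error {
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command("kubectl", args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	// Paths as they appear in the log; hypothetical anywhere else.
	kubeconfig := "/var/lib/minikube/kubeconfig"
	if err := applyAddon(kubeconfig,
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	); err != nil {
		fmt.Println(err)
	}
}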
	I0103 20:13:49.131462   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:49.132013   61400 main.go:141] libmachine: (old-k8s-version-927922) Found IP for machine: 192.168.72.12
	I0103 20:13:49.132041   61400 main.go:141] libmachine: (old-k8s-version-927922) Reserving static IP address...
	I0103 20:13:49.132059   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has current primary IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:49.132507   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "old-k8s-version-927922", mac: "52:54:00:61:79:06", ip: "192.168.72.12"} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:13:49.132543   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | skip adding static IP to network mk-old-k8s-version-927922 - found existing host DHCP lease matching {name: "old-k8s-version-927922", mac: "52:54:00:61:79:06", ip: "192.168.72.12"}
	I0103 20:13:49.132560   61400 main.go:141] libmachine: (old-k8s-version-927922) Reserved static IP address: 192.168.72.12
	I0103 20:13:49.132582   61400 main.go:141] libmachine: (old-k8s-version-927922) Waiting for SSH to be available...
	I0103 20:13:49.132597   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | Getting to WaitForSSH function...
	I0103 20:13:49.135129   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:49.135499   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:79:06", ip: ""} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:13:49.135536   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:49.135703   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | Using SSH client type: external
	I0103 20:13:49.135728   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | Using SSH private key: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/old-k8s-version-927922/id_rsa (-rw-------)
	I0103 20:13:49.135765   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.12 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17885-9609/.minikube/machines/old-k8s-version-927922/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0103 20:13:49.135780   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | About to run SSH command:
	I0103 20:13:49.135796   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | exit 0
	I0103 20:13:49.226568   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | SSH cmd err, output: <nil>: 
	I0103 20:13:49.226890   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetConfigRaw
	I0103 20:13:49.227536   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetIP
	I0103 20:13:49.230668   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:49.231038   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:79:06", ip: ""} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:13:49.231064   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:49.231277   61400 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/old-k8s-version-927922/config.json ...
	I0103 20:13:49.231456   61400 machine.go:88] provisioning docker machine ...
	I0103 20:13:49.231473   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .DriverName
	I0103 20:13:49.231708   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetMachineName
	I0103 20:13:49.231862   61400 buildroot.go:166] provisioning hostname "old-k8s-version-927922"
	I0103 20:13:49.231885   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetMachineName
	I0103 20:13:49.232002   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHHostname
	I0103 20:13:49.234637   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:49.235012   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:79:06", ip: ""} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:13:49.235048   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:49.235196   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHPort
	I0103 20:13:49.235338   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHKeyPath
	I0103 20:13:49.235445   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHKeyPath
	I0103 20:13:49.235543   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHUsername
	I0103 20:13:49.235748   61400 main.go:141] libmachine: Using SSH client type: native
	I0103 20:13:49.236196   61400 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.72.12 22 <nil> <nil>}
	I0103 20:13:49.236226   61400 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-927922 && echo "old-k8s-version-927922" | sudo tee /etc/hostname
	I0103 20:13:49.377588   61400 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-927922
	
	I0103 20:13:49.377625   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHHostname
	I0103 20:13:49.381244   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:49.381634   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:79:06", ip: ""} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:13:49.381680   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:49.381885   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHPort
	I0103 20:13:49.382115   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHKeyPath
	I0103 20:13:49.382311   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHKeyPath
	I0103 20:13:49.382538   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHUsername
	I0103 20:13:49.382721   61400 main.go:141] libmachine: Using SSH client type: native
	I0103 20:13:49.383096   61400 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.72.12 22 <nil> <nil>}
	I0103 20:13:49.383125   61400 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-927922' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-927922/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-927922' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0103 20:13:49.517214   61400 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0103 20:13:49.517246   61400 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17885-9609/.minikube CaCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17885-9609/.minikube}
	I0103 20:13:49.517268   61400 buildroot.go:174] setting up certificates
	I0103 20:13:49.517280   61400 provision.go:83] configureAuth start
	I0103 20:13:49.517299   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetMachineName
	I0103 20:13:49.517606   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetIP
	I0103 20:13:49.520819   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:49.521255   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:79:06", ip: ""} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:13:49.521284   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:49.521442   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHHostname
	I0103 20:13:49.523926   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:49.524310   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:79:06", ip: ""} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:13:49.524364   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:49.524495   61400 provision.go:138] copyHostCerts
	I0103 20:13:49.524604   61400 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem, removing ...
	I0103 20:13:49.524618   61400 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem
	I0103 20:13:49.524714   61400 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem (1078 bytes)
	I0103 20:13:49.524842   61400 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem, removing ...
	I0103 20:13:49.524855   61400 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem
	I0103 20:13:49.524885   61400 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem (1123 bytes)
	I0103 20:13:49.524982   61400 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem, removing ...
	I0103 20:13:49.525020   61400 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem
	I0103 20:13:49.525063   61400 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem (1679 bytes)
	I0103 20:13:49.525143   61400 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-927922 san=[192.168.72.12 192.168.72.12 localhost 127.0.0.1 minikube old-k8s-version-927922]
	I0103 20:13:49.896621   61400 provision.go:172] copyRemoteCerts
	I0103 20:13:49.896687   61400 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0103 20:13:49.896728   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHHostname
	I0103 20:13:49.899859   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:49.900239   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:79:06", ip: ""} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:13:49.900274   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:49.900456   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHPort
	I0103 20:13:49.900690   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHKeyPath
	I0103 20:13:49.900873   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHUsername
	I0103 20:13:49.901064   61400 sshutil.go:53] new ssh client: &{IP:192.168.72.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/old-k8s-version-927922/id_rsa Username:docker}
	I0103 20:13:49.993569   61400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0103 20:13:50.017597   61400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0103 20:13:50.041139   61400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0103 20:13:50.064499   61400 provision.go:86] duration metric: configureAuth took 547.178498ms
	I0103 20:13:50.064533   61400 buildroot.go:189] setting minikube options for container-runtime
	I0103 20:13:50.064770   61400 config.go:182] Loaded profile config "old-k8s-version-927922": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0103 20:13:50.064848   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHHostname
	I0103 20:13:50.068198   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:50.068637   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:79:06", ip: ""} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:13:50.068672   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:50.068873   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHPort
	I0103 20:13:50.069080   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHKeyPath
	I0103 20:13:50.069284   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHKeyPath
	I0103 20:13:50.069457   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHUsername
	I0103 20:13:50.069640   61400 main.go:141] libmachine: Using SSH client type: native
	I0103 20:13:50.070115   61400 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.72.12 22 <nil> <nil>}
	I0103 20:13:50.070146   61400 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0103 20:13:50.450845   61400 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0103 20:13:50.450873   61400 machine.go:91] provisioned docker machine in 1.219404511s
	I0103 20:13:50.450886   61400 start.go:300] post-start starting for "old-k8s-version-927922" (driver="kvm2")
	I0103 20:13:50.450899   61400 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0103 20:13:50.450924   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .DriverName
	I0103 20:13:50.451263   61400 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0103 20:13:50.451328   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHHostname
	I0103 20:13:50.455003   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:50.455413   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:79:06", ip: ""} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:13:50.455436   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:50.455644   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHPort
	I0103 20:13:50.455796   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHKeyPath
	I0103 20:13:50.455919   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHUsername
	I0103 20:13:50.456031   61400 sshutil.go:53] new ssh client: &{IP:192.168.72.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/old-k8s-version-927922/id_rsa Username:docker}
	I0103 20:13:50.563846   61400 ssh_runner.go:195] Run: cat /etc/os-release
	I0103 20:13:50.569506   61400 info.go:137] Remote host: Buildroot 2021.02.12
	I0103 20:13:50.569532   61400 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-9609/.minikube/addons for local assets ...
	I0103 20:13:50.569626   61400 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-9609/.minikube/files for local assets ...
	I0103 20:13:50.569726   61400 filesync.go:149] local asset: /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem -> 167952.pem in /etc/ssl/certs
	I0103 20:13:50.569857   61400 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0103 20:13:50.581218   61400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem --> /etc/ssl/certs/167952.pem (1708 bytes)
	I0103 20:13:50.612328   61400 start.go:303] post-start completed in 161.425373ms
	I0103 20:13:50.612359   61400 fix.go:56] fixHost completed within 20.392994827s
	I0103 20:13:50.612383   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHHostname
	I0103 20:13:50.615776   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:50.616241   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:79:06", ip: ""} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:13:50.616268   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:50.616368   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHPort
	I0103 20:13:50.616655   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHKeyPath
	I0103 20:13:50.616849   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHKeyPath
	I0103 20:13:50.617088   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHUsername
	I0103 20:13:50.617286   61400 main.go:141] libmachine: Using SSH client type: native
	I0103 20:13:50.617764   61400 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.72.12 22 <nil> <nil>}
	I0103 20:13:50.617791   61400 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0103 20:13:50.740437   61400 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704312830.691065491
	
	I0103 20:13:50.740506   61400 fix.go:206] guest clock: 1704312830.691065491
	I0103 20:13:50.740528   61400 fix.go:219] Guest: 2024-01-03 20:13:50.691065491 +0000 UTC Remote: 2024-01-03 20:13:50.612363446 +0000 UTC m=+357.606588552 (delta=78.702045ms)
	I0103 20:13:50.740563   61400 fix.go:190] guest clock delta is within tolerance: 78.702045ms
	I0103 20:13:50.740574   61400 start.go:83] releasing machines lock for "old-k8s-version-927922", held for 20.521248173s
	I0103 20:13:50.740606   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .DriverName
	I0103 20:13:50.740879   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetIP
	I0103 20:13:50.743952   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:50.744357   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:79:06", ip: ""} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:13:50.744397   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:50.744668   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .DriverName
	I0103 20:13:50.745932   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .DriverName
	I0103 20:13:50.746189   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .DriverName
	I0103 20:13:50.746302   61400 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0103 20:13:50.746343   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHHostname
	I0103 20:13:50.746759   61400 ssh_runner.go:195] Run: cat /version.json
	I0103 20:13:50.746784   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHHostname
	I0103 20:13:50.749593   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:50.749994   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:79:06", ip: ""} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:13:50.750029   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:50.750496   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHPort
	I0103 20:13:50.750738   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHKeyPath
	I0103 20:13:50.750900   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHUsername
	I0103 20:13:50.751141   61400 sshutil.go:53] new ssh client: &{IP:192.168.72.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/old-k8s-version-927922/id_rsa Username:docker}
	I0103 20:13:50.751696   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHPort
	I0103 20:13:50.751779   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:50.751842   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHKeyPath
	I0103 20:13:50.751898   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:79:06", ip: ""} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:13:50.751960   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHUsername
	I0103 20:13:50.752031   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:50.752063   61400 sshutil.go:53] new ssh client: &{IP:192.168.72.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/old-k8s-version-927922/id_rsa Username:docker}
	I0103 20:13:50.841084   61400 ssh_runner.go:195] Run: systemctl --version
	I0103 20:13:50.882564   61400 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0103 20:13:51.041188   61400 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0103 20:13:51.049023   61400 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0103 20:13:51.049103   61400 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0103 20:13:51.068267   61400 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0103 20:13:51.068297   61400 start.go:475] detecting cgroup driver to use...
	I0103 20:13:51.068371   61400 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0103 20:13:51.086266   61400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0103 20:13:51.101962   61400 docker.go:203] disabling cri-docker service (if available) ...
	I0103 20:13:51.102030   61400 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0103 20:13:51.118269   61400 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0103 20:13:51.134642   61400 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0103 20:13:51.310207   61400 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0103 20:13:51.495609   61400 docker.go:219] disabling docker service ...
	I0103 20:13:51.495743   61400 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0103 20:13:51.512101   61400 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0103 20:13:51.527244   61400 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0103 20:13:51.696874   61400 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0103 20:13:51.836885   61400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0103 20:13:51.849905   61400 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0103 20:13:51.867827   61400 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0103 20:13:51.867895   61400 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:13:51.877598   61400 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0103 20:13:51.877713   61400 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:13:51.886744   61400 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:13:51.898196   61400 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:13:51.910021   61400 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0103 20:13:51.921882   61400 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0103 20:13:51.930668   61400 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0103 20:13:51.930727   61400 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0103 20:13:51.943294   61400 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0103 20:13:51.952273   61400 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0103 20:13:52.065108   61400 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0103 20:13:52.272042   61400 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0103 20:13:52.272143   61400 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0103 20:13:52.277268   61400 start.go:543] Will wait 60s for crictl version
	I0103 20:13:52.277436   61400 ssh_runner.go:195] Run: which crictl
	I0103 20:13:52.281294   61400 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0103 20:13:52.334056   61400 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0103 20:13:52.334231   61400 ssh_runner.go:195] Run: crio --version
	I0103 20:13:52.390900   61400 ssh_runner.go:195] Run: crio --version
	I0103 20:13:52.454400   61400 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I0103 20:13:52.455682   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetIP
	I0103 20:13:52.459194   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:52.459656   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:79:06", ip: ""} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:13:52.459683   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:52.460250   61400 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0103 20:13:52.465579   61400 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0103 20:13:52.480500   61400 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0103 20:13:52.480620   61400 ssh_runner.go:195] Run: sudo crictl images --output json
	I0103 20:13:52.532378   61400 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0103 20:13:52.532450   61400 ssh_runner.go:195] Run: which lz4
	I0103 20:13:52.537132   61400 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0103 20:13:52.541880   61400 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0103 20:13:52.541912   61400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0103 20:13:50.443235   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:13:52.942235   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:13:51.801673   62015 addons.go:508] enable addons completed in 1.467711333s: enabled=[metrics-server storage-provisioner default-storageclass]
	I0103 20:13:52.779944   62015 node_ready.go:58] node "no-preload-749210" has status "Ready":"False"
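The node_ready lines above come from a simple poll: the node object is fetched repeatedly until its Ready condition reports True, capped at the logged 6m0s. A hedged sketch of an equivalent check using kubectl's JSONPath output — illustrative only, not minikube's node_ready implementation; the node name and timeout are from the log, the polling interval is assumed.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitNodeReady polls `kubectl get node` until the Ready condition is True
// or the timeout expires.
func waitNodeReady(node string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	jsonpath := `jsonpath={.status.conditions[?(@.type=="Ready")].status}`
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "get", "node", node, "-o", jsonpath).Output()
		if err == nil && strings.TrimSpace(string(out)) == "True" {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("node %q not Ready after %s", node, timeout)
}

func main() {
	if err := waitNodeReady("no-preload-749210", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}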
	I0103 20:13:50.699945   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:13:51.199773   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:13:51.227739   62050 api_server.go:72] duration metric: took 2.52863821s to wait for apiserver process to appear ...
	I0103 20:13:51.227768   62050 api_server.go:88] waiting for apiserver healthz status ...
	I0103 20:13:51.227789   62050 api_server.go:253] Checking apiserver healthz at https://192.168.39.139:8444/healthz ...
	I0103 20:13:51.228288   62050 api_server.go:269] stopped: https://192.168.39.139:8444/healthz: Get "https://192.168.39.139:8444/healthz": dial tcp 192.168.39.139:8444: connect: connection refused
	I0103 20:13:51.728906   62050 api_server.go:253] Checking apiserver healthz at https://192.168.39.139:8444/healthz ...
	I0103 20:13:55.679221   62050 api_server.go:279] https://192.168.39.139:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0103 20:13:55.679255   62050 api_server.go:103] status: https://192.168.39.139:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0103 20:13:55.679273   62050 api_server.go:253] Checking apiserver healthz at https://192.168.39.139:8444/healthz ...
	I0103 20:13:55.722466   62050 api_server.go:279] https://192.168.39.139:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0103 20:13:55.722528   62050 api_server.go:103] status: https://192.168.39.139:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0103 20:13:55.728699   62050 api_server.go:253] Checking apiserver healthz at https://192.168.39.139:8444/healthz ...
	I0103 20:13:55.771739   62050 api_server.go:279] https://192.168.39.139:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0103 20:13:55.771841   62050 api_server.go:103] status: https://192.168.39.139:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0103 20:13:56.228041   62050 api_server.go:253] Checking apiserver healthz at https://192.168.39.139:8444/healthz ...
	I0103 20:13:56.234578   62050 api_server.go:279] https://192.168.39.139:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0103 20:13:56.234618   62050 api_server.go:103] status: https://192.168.39.139:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0103 20:13:56.728122   62050 api_server.go:253] Checking apiserver healthz at https://192.168.39.139:8444/healthz ...
	I0103 20:13:56.734464   62050 api_server.go:279] https://192.168.39.139:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0103 20:13:56.734505   62050 api_server.go:103] status: https://192.168.39.139:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0103 20:13:57.228124   62050 api_server.go:253] Checking apiserver healthz at https://192.168.39.139:8444/healthz ...
	I0103 20:13:57.239527   62050 api_server.go:279] https://192.168.39.139:8444/healthz returned 200:
	ok
	I0103 20:13:57.253416   62050 api_server.go:141] control plane version: v1.28.4
	I0103 20:13:57.253445   62050 api_server.go:131] duration metric: took 6.025669125s to wait for apiserver health ...
	I0103 20:13:57.253456   62050 cni.go:84] Creating CNI manager for ""
	I0103 20:13:57.253464   62050 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0103 20:13:57.255608   62050 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
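The healthz sequence above (connection refused, then 403, then 500 with failing post-start hooks, then 200 ok) is the normal progression while the apiserver finishes bootstrapping. Below is a minimal sketch of such a poll against the logged endpoint, assuming a self-signed serving certificate (hence the skipped TLS verification) and anonymous access; it is illustrative only, not the api_server.go code.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthz polls an apiserver /healthz endpoint until it returns 200 or
// the timeout expires. Non-200 responses (403/500, as in the log) are
// treated as "not ready yet" and retried.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The serving cert is self-signed in this setup; skip verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	if err := waitHealthz("https://192.168.39.139:8444/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}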
	I0103 20:13:54.091654   61400 crio.go:444] Took 1.554550 seconds to copy over tarball
	I0103 20:13:54.091734   61400 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0103 20:13:57.252728   61400 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.160960283s)
	I0103 20:13:57.252762   61400 crio.go:451] Took 3.161068 seconds to extract the tarball
	I0103 20:13:57.252773   61400 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0103 20:13:57.307431   61400 ssh_runner.go:195] Run: sudo crictl images --output json
	I0103 20:13:57.362170   61400 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0103 20:13:57.362199   61400 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0103 20:13:57.362266   61400 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 20:13:57.362306   61400 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0103 20:13:57.362491   61400 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0103 20:13:57.362505   61400 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0103 20:13:57.362630   61400 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0103 20:13:57.362663   61400 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0103 20:13:57.362749   61400 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0103 20:13:57.362830   61400 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0103 20:13:57.364964   61400 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0103 20:13:57.364981   61400 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0103 20:13:57.364999   61400 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0103 20:13:57.365049   61400 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0103 20:13:57.365081   61400 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 20:13:57.365159   61400 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0103 20:13:57.365337   61400 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0103 20:13:57.365364   61400 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0103 20:13:57.585886   61400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0103 20:13:57.611291   61400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0103 20:13:57.622467   61400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0103 20:13:57.623443   61400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0103 20:13:57.627321   61400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0103 20:13:57.630211   61400 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0103 20:13:57.630253   61400 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0103 20:13:57.630299   61400 ssh_runner.go:195] Run: which crictl
	I0103 20:13:57.647358   61400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0103 20:13:57.670079   61400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0103 20:13:57.724516   61400 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0103 20:13:57.724560   61400 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0103 20:13:57.724606   61400 ssh_runner.go:195] Run: which crictl
	I0103 20:13:57.747338   61400 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0103 20:13:57.747387   61400 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0103 20:13:57.747451   61400 ssh_runner.go:195] Run: which crictl
	I0103 20:13:57.767682   61400 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0103 20:13:57.767741   61400 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0103 20:13:57.767749   61400 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0103 20:13:57.767772   61400 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0103 20:13:57.767782   61400 ssh_runner.go:195] Run: which crictl
	I0103 20:13:57.767778   61400 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0103 20:13:57.767834   61400 ssh_runner.go:195] Run: which crictl
	I0103 20:13:57.811841   61400 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0103 20:13:57.811895   61400 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0103 20:13:57.811861   61400 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0103 20:13:57.811948   61400 ssh_runner.go:195] Run: which crictl
	I0103 20:13:57.811984   61400 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0103 20:13:57.811948   61400 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0103 20:13:57.812053   61400 ssh_runner.go:195] Run: which crictl
	I0103 20:13:57.812098   61400 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0103 20:13:57.812128   61400 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0103 20:13:57.849648   61400 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0103 20:13:57.849722   61400 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0103 20:13:57.916421   61400 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0103 20:13:57.916483   61400 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0103 20:13:57.916529   61400 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.1
	I0103 20:13:57.936449   61400 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0103 20:13:57.936474   61400 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0103 20:13:57.936485   61400 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0103 20:13:57.936538   61400 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
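cache_images reconciles each required image by asking the runtime for its current ID with podman image inspect --format {{.Id}}, removing a mismatched tag with crictl rmi, and queueing a transfer from the host-side cache directory. A rough sketch of that check-and-remove decision (only the two CLI invocations come from the log; the helper and its return convention are made up for illustration):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// ensureImage reports whether the image must be transferred from the cache.
// If the runtime holds the tag at a different ID, the stale tag is removed
// first so the cached copy can be loaded cleanly.
func ensureImage(image, wantID string) (needsTransfer bool, err error) {
	out, err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", image).Output()
	if err != nil {
		return true, nil // not present at all: needs transfer
	}
	if strings.TrimSpace(string(out)) == wantID {
		return false, nil // already loaded with the expected ID
	}
	// Wrong digest: drop the tag before reloading from cache.
	if rmOut, rmErr := exec.Command("sudo", "crictl", "rmi", image).CombinedOutput(); rmErr != nil {
		return true, fmt.Errorf("rmi %s: %v: %s", image, rmErr, rmOut)
	}
	return true, nil
}

func main() {
	transfer, err := ensureImage("registry.k8s.io/pause:3.1",
		"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e")
	fmt.Println(transfer, err)
}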
	I0103 20:13:55.436957   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:13:57.441634   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:13:55.278078   62015 node_ready.go:58] node "no-preload-749210" has status "Ready":"False"
	I0103 20:13:57.280673   62015 node_ready.go:58] node "no-preload-749210" has status "Ready":"False"
	I0103 20:13:58.185787   62015 node_ready.go:49] node "no-preload-749210" has status "Ready":"True"
	I0103 20:13:58.185819   62015 node_ready.go:38] duration metric: took 7.413368774s waiting for node "no-preload-749210" to be "Ready" ...
	I0103 20:13:58.185837   62015 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:13:58.196599   62015 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-rbx58" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:58.203024   62015 pod_ready.go:92] pod "coredns-76f75df574-rbx58" in "kube-system" namespace has status "Ready":"True"
	I0103 20:13:58.203047   62015 pod_ready.go:81] duration metric: took 6.423108ms waiting for pod "coredns-76f75df574-rbx58" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:58.203057   62015 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-749210" in "kube-system" namespace to be "Ready" ...
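The pod_ready.go waits above reduce to fetching a pod and testing its Ready condition until a deadline expires. A rough client-go equivalent (pod name and kubeconfig path are taken from this run; the polling loop itself is an illustrative sketch, not the test's implementation):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17885-9609/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
			"etcd-no-preload-749210", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}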
	I0103 20:13:57.257123   62050 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0103 20:13:57.293641   62050 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0103 20:13:57.341721   62050 system_pods.go:43] waiting for kube-system pods to appear ...
	I0103 20:13:57.360995   62050 system_pods.go:59] 8 kube-system pods found
	I0103 20:13:57.361054   62050 system_pods.go:61] "coredns-5dd5756b68-zxzqg" [d066762e-7e1f-4b3a-9b21-6a7a3ca53edd] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0103 20:13:57.361065   62050 system_pods.go:61] "etcd-default-k8s-diff-port-018788" [c0023ec6-ae61-4532-840e-287e9945f4ec] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0103 20:13:57.361109   62050 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-018788" [bba03f36-cef8-4e19-adc5-1a65756bdf1c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0103 20:13:57.361132   62050 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-018788" [baf7a3c2-3573-4977-be30-d63e4df2de22] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0103 20:13:57.361147   62050 system_pods.go:61] "kube-proxy-wqjlv" [de5a1b04-4bce-4111-bfe8-2adb2f947d78] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0103 20:13:57.361171   62050 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-018788" [cdc74e5c-0085-49ae-9471-fce52a1a6b2f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0103 20:13:57.361189   62050 system_pods.go:61] "metrics-server-57f55c9bc5-pgbbj" [ee3963d9-1627-4e78-91e5-1f92c2011f4b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0103 20:13:57.361198   62050 system_pods.go:61] "storage-provisioner" [ef3511cb-5587-4ea5-86b6-d52cc5afb226] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0103 20:13:57.361207   62050 system_pods.go:74] duration metric: took 19.402129ms to wait for pod list to return data ...
	I0103 20:13:57.361218   62050 node_conditions.go:102] verifying NodePressure condition ...
	I0103 20:13:57.369396   62050 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0103 20:13:57.369435   62050 node_conditions.go:123] node cpu capacity is 2
	I0103 20:13:57.369449   62050 node_conditions.go:105] duration metric: took 8.224276ms to run NodePressure ...
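The NodePressure verification reads the node's ephemeral-storage and cpu capacity straight from its status. Something like the following client-go snippet shows where the two numbers above (17784752Ki and 2) come from; the node name and kubeconfig path are reused from the log, the rest is a sketch:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("",
		"/home/jenkins/minikube-integration/17885-9609/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	node, err := client.CoreV1().Nodes().Get(context.TODO(),
		"default-k8s-diff-port-018788", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// The log reports exactly these two capacities for the node.
	storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
	cpu := node.Status.Capacity[corev1.ResourceCPU]
	fmt.Printf("ephemeral storage: %s, cpu: %s\n", storage.String(), cpu.String())
}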
	I0103 20:13:57.369470   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:57.615954   62050 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0103 20:13:57.624280   62050 kubeadm.go:787] kubelet initialised
	I0103 20:13:57.624312   62050 kubeadm.go:788] duration metric: took 8.328431ms waiting for restarted kubelet to initialise ...
	I0103 20:13:57.624321   62050 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:13:57.637920   62050 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-zxzqg" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:58.734401   62050 pod_ready.go:97] node "default-k8s-diff-port-018788" hosting pod "coredns-5dd5756b68-zxzqg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018788" has status "Ready":"False"
	I0103 20:13:58.734439   62050 pod_ready.go:81] duration metric: took 1.096478242s waiting for pod "coredns-5dd5756b68-zxzqg" in "kube-system" namespace to be "Ready" ...
	E0103 20:13:58.734454   62050 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-018788" hosting pod "coredns-5dd5756b68-zxzqg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018788" has status "Ready":"False"
	I0103 20:13:58.734463   62050 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-018788" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:59.605120   62050 pod_ready.go:97] node "default-k8s-diff-port-018788" hosting pod "etcd-default-k8s-diff-port-018788" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018788" has status "Ready":"False"
	I0103 20:13:59.605156   62050 pod_ready.go:81] duration metric: took 870.676494ms waiting for pod "etcd-default-k8s-diff-port-018788" in "kube-system" namespace to be "Ready" ...
	E0103 20:13:59.605168   62050 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-018788" hosting pod "etcd-default-k8s-diff-port-018788" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018788" has status "Ready":"False"
	I0103 20:13:59.605174   62050 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-018788" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:00.176543   62050 pod_ready.go:97] node "default-k8s-diff-port-018788" hosting pod "kube-apiserver-default-k8s-diff-port-018788" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018788" has status "Ready":"False"
	I0103 20:14:00.176583   62050 pod_ready.go:81] duration metric: took 571.400586ms waiting for pod "kube-apiserver-default-k8s-diff-port-018788" in "kube-system" namespace to be "Ready" ...
	E0103 20:14:00.176599   62050 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-018788" hosting pod "kube-apiserver-default-k8s-diff-port-018788" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018788" has status "Ready":"False"
	I0103 20:14:00.176608   62050 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-018788" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:00.201556   62050 pod_ready.go:97] node "default-k8s-diff-port-018788" hosting pod "kube-controller-manager-default-k8s-diff-port-018788" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018788" has status "Ready":"False"
	I0103 20:14:00.201620   62050 pod_ready.go:81] duration metric: took 24.987825ms waiting for pod "kube-controller-manager-default-k8s-diff-port-018788" in "kube-system" namespace to be "Ready" ...
	E0103 20:14:00.201637   62050 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-018788" hosting pod "kube-controller-manager-default-k8s-diff-port-018788" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018788" has status "Ready":"False"
	I0103 20:14:00.201647   62050 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-wqjlv" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:00.233069   62050 pod_ready.go:97] node "default-k8s-diff-port-018788" hosting pod "kube-proxy-wqjlv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018788" has status "Ready":"False"
	I0103 20:14:00.233108   62050 pod_ready.go:81] duration metric: took 31.451633ms waiting for pod "kube-proxy-wqjlv" in "kube-system" namespace to be "Ready" ...
	E0103 20:14:00.233127   62050 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-018788" hosting pod "kube-proxy-wqjlv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018788" has status "Ready":"False"
	I0103 20:14:00.233135   62050 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-018788" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:00.253505   62050 pod_ready.go:97] node "default-k8s-diff-port-018788" hosting pod "kube-scheduler-default-k8s-diff-port-018788" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018788" has status "Ready":"False"
	I0103 20:14:00.253534   62050 pod_ready.go:81] duration metric: took 20.386039ms waiting for pod "kube-scheduler-default-k8s-diff-port-018788" in "kube-system" namespace to be "Ready" ...
	E0103 20:14:00.253550   62050 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-018788" hosting pod "kube-scheduler-default-k8s-diff-port-018788" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018788" has status "Ready":"False"
	I0103 20:14:00.253559   62050 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:00.272626   62050 pod_ready.go:97] node "default-k8s-diff-port-018788" hosting pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018788" has status "Ready":"False"
	I0103 20:14:00.272661   62050 pod_ready.go:81] duration metric: took 19.09311ms waiting for pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace to be "Ready" ...
	E0103 20:14:00.272677   62050 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-018788" hosting pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018788" has status "Ready":"False"
	I0103 20:14:00.272687   62050 pod_ready.go:38] duration metric: took 2.64835186s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:14:00.272705   62050 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0103 20:14:00.321126   62050 ops.go:34] apiserver oom_adj: -16
	I0103 20:14:00.321189   62050 kubeadm.go:640] restartCluster took 23.164374098s
	I0103 20:14:00.321205   62050 kubeadm.go:406] StartCluster complete in 23.210428007s
	I0103 20:14:00.321226   62050 settings.go:142] acquiring lock: {Name:mkd213c48538fa01cb82b417485055a8adbf5e4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:14:00.321322   62050 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17885-9609/kubeconfig
	I0103 20:14:00.323470   62050 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-9609/kubeconfig: {Name:mkbd4e6a8b39f5a4a43fb71671a7bbd8b1617cf1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:14:00.323925   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0103 20:14:00.324242   62050 config.go:182] Loaded profile config "default-k8s-diff-port-018788": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 20:14:00.324381   62050 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0103 20:14:00.324467   62050 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-018788"
	I0103 20:14:00.324487   62050 addons.go:237] Setting addon storage-provisioner=true in "default-k8s-diff-port-018788"
	W0103 20:14:00.324495   62050 addons.go:246] addon storage-provisioner should already be in state true
	I0103 20:14:00.324536   62050 host.go:66] Checking if "default-k8s-diff-port-018788" exists ...
	I0103 20:14:00.324984   62050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:14:00.325013   62050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:14:00.325285   62050 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-018788"
	I0103 20:14:00.325304   62050 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-018788"
	I0103 20:14:00.325329   62050 addons.go:237] Setting addon metrics-server=true in "default-k8s-diff-port-018788"
	W0103 20:14:00.325337   62050 addons.go:246] addon metrics-server should already be in state true
	I0103 20:14:00.325376   62050 host.go:66] Checking if "default-k8s-diff-port-018788" exists ...
	I0103 20:14:00.325309   62050 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-018788"
	I0103 20:14:00.325722   62050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:14:00.325740   62050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:14:00.325935   62050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:14:00.326021   62050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:14:00.347496   62050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42465
	I0103 20:14:00.347895   62050 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:14:00.348392   62050 main.go:141] libmachine: Using API Version  1
	I0103 20:14:00.348415   62050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:14:00.348728   62050 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:14:00.349192   62050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:14:00.349228   62050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:14:00.349916   62050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42905
	I0103 20:14:00.350369   62050 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:14:00.351043   62050 main.go:141] libmachine: Using API Version  1
	I0103 20:14:00.351067   62050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:14:00.351579   62050 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:14:00.352288   62050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:14:00.352392   62050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:14:00.358540   62050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33231
	I0103 20:14:00.359079   62050 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:14:00.359582   62050 main.go:141] libmachine: Using API Version  1
	I0103 20:14:00.359607   62050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:14:00.359939   62050 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:14:00.360114   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetState
	I0103 20:14:00.364583   62050 addons.go:237] Setting addon default-storageclass=true in "default-k8s-diff-port-018788"
	W0103 20:14:00.364614   62050 addons.go:246] addon default-storageclass should already be in state true
	I0103 20:14:00.364645   62050 host.go:66] Checking if "default-k8s-diff-port-018788" exists ...
	I0103 20:14:00.365032   62050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:14:00.365080   62050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:14:00.365268   62050 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-018788" context rescaled to 1 replicas
	I0103 20:14:00.365315   62050 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.139 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0103 20:14:00.367628   62050 out.go:177] * Verifying Kubernetes components...
	I0103 20:14:00.376061   62050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 20:14:00.382421   62050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42521
	I0103 20:14:00.382601   62050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39615
	I0103 20:14:00.382708   62050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40189
	I0103 20:14:00.383285   62050 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:14:00.383310   62050 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:14:00.383837   62050 main.go:141] libmachine: Using API Version  1
	I0103 20:14:00.383837   62050 main.go:141] libmachine: Using API Version  1
	I0103 20:14:00.383855   62050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:14:00.383862   62050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:14:00.384200   62050 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:14:00.384674   62050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:14:00.384701   62050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:14:00.384740   62050 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:14:00.384914   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetState
	I0103 20:14:00.386513   62050 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:14:00.387010   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .DriverName
	I0103 20:14:00.387325   62050 main.go:141] libmachine: Using API Version  1
	I0103 20:14:00.387343   62050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:14:00.389302   62050 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0103 20:14:00.390931   62050 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0103 20:14:00.390945   62050 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0103 20:14:00.390960   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHHostname
	I0103 20:14:00.390651   62050 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:14:00.392318   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetState
	I0103 20:14:00.394641   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:14:00.395185   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c8:9f", ip: ""} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:14:00.395212   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:14:00.395483   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHPort
	I0103 20:14:00.395954   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .DriverName
	I0103 20:14:00.398448   62050 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 20:14:00.400431   62050 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0103 20:14:00.400454   62050 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0103 20:14:00.400476   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHHostname
	I0103 20:14:00.404480   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:14:00.405112   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c8:9f", ip: ""} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:14:00.405145   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:14:00.405765   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHPort
	I0103 20:14:00.405971   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHKeyPath
	I0103 20:14:00.407610   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHUsername
	I0103 20:14:00.407808   62050 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/default-k8s-diff-port-018788/id_rsa Username:docker}
	I0103 20:14:00.410796   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHKeyPath
	I0103 20:14:00.410964   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHUsername
	I0103 20:14:00.411436   62050 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/default-k8s-diff-port-018788/id_rsa Username:docker}
	I0103 20:14:00.417626   62050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41715
	I0103 20:14:00.418201   62050 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:14:00.422710   62050 main.go:141] libmachine: Using API Version  1
	I0103 20:14:00.422743   62050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:14:00.423232   62050 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:14:00.423421   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetState
	I0103 20:14:00.425364   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .DriverName
	I0103 20:14:00.425678   62050 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0103 20:14:00.425697   62050 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0103 20:14:00.425717   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHHostname
	I0103 20:14:00.429190   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:14:00.429720   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c8:9f", ip: ""} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:14:00.429745   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:14:00.429898   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHPort
	I0103 20:14:00.430599   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHKeyPath
	I0103 20:14:00.430803   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHUsername
	I0103 20:14:00.430946   62050 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/default-k8s-diff-port-018788/id_rsa Username:docker}
	I0103 20:14:00.621274   62050 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0103 20:14:00.621356   62050 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0103 20:14:00.641979   62050 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0103 20:14:00.681414   62050 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0103 20:14:00.682076   62050 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0103 20:14:00.682118   62050 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0103 20:14:00.760063   62050 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0103 20:14:00.760095   62050 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0103 20:14:00.833648   62050 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0103 20:14:00.840025   62050 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-018788" to be "Ready" ...
	I0103 20:14:00.840147   62050 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0103 20:14:02.423584   62050 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.78156374s)
	I0103 20:14:02.423631   62050 main.go:141] libmachine: Making call to close driver server
	I0103 20:14:02.423646   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .Close
	I0103 20:14:02.423584   62050 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.742133551s)
	I0103 20:14:02.423765   62050 main.go:141] libmachine: Making call to close driver server
	I0103 20:14:02.423784   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .Close
	I0103 20:14:02.423889   62050 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:14:02.423906   62050 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:14:02.423920   62050 main.go:141] libmachine: Making call to close driver server
	I0103 20:14:02.423930   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .Close
	I0103 20:14:02.424042   62050 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:14:02.424061   62050 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:14:02.424078   62050 main.go:141] libmachine: Making call to close driver server
	I0103 20:14:02.424076   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | Closing plugin on server side
	I0103 20:14:02.424104   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .Close
	I0103 20:14:02.424125   62050 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:14:02.424137   62050 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:14:02.424472   62050 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:14:02.424489   62050 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:14:02.424502   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | Closing plugin on server side
	I0103 20:14:02.431339   62050 main.go:141] libmachine: Making call to close driver server
	I0103 20:14:02.431368   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .Close
	I0103 20:14:02.431754   62050 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:14:02.431789   62050 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:14:02.431809   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | Closing plugin on server side
	I0103 20:14:02.575829   62050 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.742131608s)
	I0103 20:14:02.575880   62050 main.go:141] libmachine: Making call to close driver server
	I0103 20:14:02.575899   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .Close
	I0103 20:14:02.576351   62050 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:14:02.576374   62050 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:14:02.576391   62050 main.go:141] libmachine: Making call to close driver server
	I0103 20:14:02.576400   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .Close
	I0103 20:14:02.576619   62050 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:14:02.576632   62050 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:14:02.576641   62050 addons.go:473] Verifying addon metrics-server=true in "default-k8s-diff-port-018788"
	I0103 20:14:02.578918   62050 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
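Enabling an addon, as the preceding lines show, means copying its manifests to /etc/kubernetes/addons/ on the node and applying them with the bundled kubectl against the in-VM kubeconfig. A simplified local version of that apply step (binary path, kubeconfig and manifest names are copied from the log; running kubectl directly instead of over SSH is the simplification):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyAddonManifests runs kubectl apply against a set of addon manifests,
// pointing it at the node-local kubeconfig the way minikube's ssh_runner does.
func applyAddonManifests(kubectl, kubeconfig string, manifests ...string) error {
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command(kubectl, args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	err := applyAddonManifests(
		"/var/lib/minikube/binaries/v1.28.4/kubectl",
		"/var/lib/minikube/kubeconfig",
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	)
	if err != nil {
		fmt.Println(err)
	}
}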
	I0103 20:13:58.180342   61400 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I0103 20:13:58.180407   61400 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0103 20:13:58.180464   61400 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0103 20:13:58.194447   61400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 20:13:58.726157   61400 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I0103 20:13:58.726232   61400 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I0103 20:14:00.187852   61400 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.461700942s)
	I0103 20:14:00.187973   61400 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (1.461718478s)
	I0103 20:14:00.188007   61400 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I0103 20:14:00.188104   61400 cache_images.go:92] LoadImages completed in 2.825887616s
	W0103 20:14:00.188202   61400 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0: no such file or directory
	I0103 20:14:00.188285   61400 ssh_runner.go:195] Run: crio config
	I0103 20:14:00.270343   61400 cni.go:84] Creating CNI manager for ""
	I0103 20:14:00.270372   61400 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0103 20:14:00.270393   61400 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0103 20:14:00.270416   61400 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.12 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-927922 NodeName:old-k8s-version-927922 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.12"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.12 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0103 20:14:00.270624   61400 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.12
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-927922"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.12
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.12"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-927922
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.72.12:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0103 20:14:00.270765   61400 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-927922 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.12
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-927922 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0103 20:14:00.270842   61400 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0103 20:14:00.282011   61400 binaries.go:44] Found k8s binaries, skipping transfer
	I0103 20:14:00.282093   61400 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0103 20:14:00.292954   61400 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0103 20:14:00.314616   61400 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0103 20:14:00.366449   61400 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I0103 20:14:00.406579   61400 ssh_runner.go:195] Run: grep 192.168.72.12	control-plane.minikube.internal$ /etc/hosts
	I0103 20:14:00.410923   61400 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.12	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
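The pair of commands above first greps for an existing control-plane.minikube.internal record and, when needed, rewrites /etc/hosts with the entry appended. The same idempotent update expressed directly in Go (path, IP and hostname from the log; the helper itself is illustrative):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostRecord makes sure the hosts file maps the control-plane hostname
// to the given IP exactly once, mirroring the grep + rewrite seen in the log.
func ensureHostRecord(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop any stale record for this hostname
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostRecord("/etc/hosts", "192.168.72.12",
		"control-plane.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}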
	I0103 20:14:00.430315   61400 certs.go:56] Setting up /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/old-k8s-version-927922 for IP: 192.168.72.12
	I0103 20:14:00.430352   61400 certs.go:190] acquiring lock for shared ca certs: {Name:mkcbd6a6a2f3ee7625ecf4a1f72bb7f9689bd33d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:14:00.430553   61400 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17885-9609/.minikube/ca.key
	I0103 20:14:00.430619   61400 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.key
	I0103 20:14:00.430718   61400 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/old-k8s-version-927922/client.key
	I0103 20:14:00.430798   61400 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/old-k8s-version-927922/apiserver.key.9a91cab3
	I0103 20:14:00.430854   61400 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/old-k8s-version-927922/proxy-client.key
	I0103 20:14:00.431018   61400 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795.pem (1338 bytes)
	W0103 20:14:00.431071   61400 certs.go:433] ignoring /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795_empty.pem, impossibly tiny 0 bytes
	I0103 20:14:00.431083   61400 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem (1675 bytes)
	I0103 20:14:00.431123   61400 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem (1078 bytes)
	I0103 20:14:00.431158   61400 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem (1123 bytes)
	I0103 20:14:00.431195   61400 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem (1679 bytes)
	I0103 20:14:00.431250   61400 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem (1708 bytes)
	I0103 20:14:00.432123   61400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/old-k8s-version-927922/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0103 20:14:00.472877   61400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/old-k8s-version-927922/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0103 20:14:00.505153   61400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/old-k8s-version-927922/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0103 20:14:00.533850   61400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/old-k8s-version-927922/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0103 20:14:00.564548   61400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0103 20:14:00.596464   61400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0103 20:14:00.626607   61400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0103 20:14:00.655330   61400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0103 20:14:00.681817   61400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem --> /usr/share/ca-certificates/167952.pem (1708 bytes)
	I0103 20:14:00.711039   61400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0103 20:14:00.742406   61400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795.pem --> /usr/share/ca-certificates/16795.pem (1338 bytes)
	I0103 20:14:00.768583   61400 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0103 20:14:00.786833   61400 ssh_runner.go:195] Run: openssl version
	I0103 20:14:00.793561   61400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167952.pem && ln -fs /usr/share/ca-certificates/167952.pem /etc/ssl/certs/167952.pem"
	I0103 20:14:00.807558   61400 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167952.pem
	I0103 20:14:00.812755   61400 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  3 19:07 /usr/share/ca-certificates/167952.pem
	I0103 20:14:00.812816   61400 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167952.pem
	I0103 20:14:00.820657   61400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167952.pem /etc/ssl/certs/3ec20f2e.0"
	I0103 20:14:00.832954   61400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0103 20:14:00.844707   61400 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:14:00.850334   61400 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  3 18:58 /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:14:00.850425   61400 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:14:00.856592   61400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0103 20:14:00.868105   61400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16795.pem && ln -fs /usr/share/ca-certificates/16795.pem /etc/ssl/certs/16795.pem"
	I0103 20:14:00.881551   61400 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16795.pem
	I0103 20:14:00.886462   61400 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  3 19:07 /usr/share/ca-certificates/16795.pem
	I0103 20:14:00.886550   61400 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16795.pem
	I0103 20:14:00.892487   61400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16795.pem /etc/ssl/certs/51391683.0"
	I0103 20:14:00.904363   61400 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0103 20:14:00.909429   61400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0103 20:14:00.915940   61400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0103 20:14:00.922496   61400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0103 20:14:00.928504   61400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0103 20:14:00.936016   61400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0103 20:14:00.943008   61400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
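Each openssl x509 -noout -checkend 86400 invocation above asks whether a certificate remains valid for at least another 24 hours. The equivalent check with Go's crypto/x509 (certificate path reused from the log; the function name and structure are a sketch):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path expires
// within the given window, the same question openssl's -checkend answers.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}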
	I0103 20:14:00.949401   61400 kubeadm.go:404] StartCluster: {Name:old-k8s-version-927922 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-927922 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.12 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 20:14:00.949524   61400 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0103 20:14:00.949614   61400 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0103 20:14:00.999406   61400 cri.go:89] found id: ""
	I0103 20:14:00.999494   61400 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0103 20:14:01.011041   61400 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0103 20:14:01.011063   61400 kubeadm.go:636] restartCluster start
	I0103 20:14:01.011130   61400 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0103 20:14:01.024488   61400 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:01.026094   61400 kubeconfig.go:92] found "old-k8s-version-927922" server: "https://192.168.72.12:8443"
	I0103 20:14:01.029577   61400 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0103 20:14:01.041599   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:01.041674   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:01.055545   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:01.542034   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:01.542135   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:01.554826   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:02.042049   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:02.042166   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:02.056693   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:02.542275   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:02.542363   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:02.557025   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:03.041864   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:03.041968   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:03.054402   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
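Each block above is one iteration of a roughly 500ms poll: restartCluster keeps asking the node (over SSH) for a kube-apiserver PID with pgrep, and pgrep exits 1 while nothing matches. A rough local Go sketch of that kind of poll loop follows; it is not minikube's implementation and assumes pgrep and passwordless sudo are available on the machine where it runs.

    // A rough sketch (not minikube's code) of the ~500ms poll seen above: keep
    // running pgrep until kube-apiserver shows up or the deadline passes.
    package main

    import (
    	"context"
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    func waitForAPIServerPID(ctx context.Context) (string, error) {
    	ticker := time.NewTicker(500 * time.Millisecond)
    	defer ticker.Stop()
    	for {
    		// Same pattern as in the log; pgrep exits non-zero when nothing matches.
    		out, err := exec.CommandContext(ctx, "sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
    		if err == nil {
    			return strings.TrimSpace(string(out)), nil
    		}
    		select {
    		case <-ctx.Done():
    			return "", ctx.Err()
    		case <-ticker.C:
    		}
    	}
    }

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    	defer cancel()
    	pid, err := waitForAPIServerPID(ctx)
    	if err != nil {
    		fmt.Println("apiserver did not appear:", err)
    		return
    	}
    	fmt.Println("apiserver pid:", pid)
    }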
	I0103 20:13:59.937077   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:02.440275   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:00.287822   62015 pod_ready.go:102] pod "etcd-no-preload-749210" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:00.712464   62015 pod_ready.go:92] pod "etcd-no-preload-749210" in "kube-system" namespace has status "Ready":"True"
	I0103 20:14:00.712486   62015 pod_ready.go:81] duration metric: took 2.509421629s waiting for pod "etcd-no-preload-749210" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:00.712494   62015 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-749210" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:00.722133   62015 pod_ready.go:92] pod "kube-apiserver-no-preload-749210" in "kube-system" namespace has status "Ready":"True"
	I0103 20:14:00.722175   62015 pod_ready.go:81] duration metric: took 9.671952ms waiting for pod "kube-apiserver-no-preload-749210" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:00.722188   62015 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-749210" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:00.728860   62015 pod_ready.go:92] pod "kube-controller-manager-no-preload-749210" in "kube-system" namespace has status "Ready":"True"
	I0103 20:14:00.728888   62015 pod_ready.go:81] duration metric: took 6.691622ms waiting for pod "kube-controller-manager-no-preload-749210" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:00.728901   62015 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5hwf4" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:00.736669   62015 pod_ready.go:92] pod "kube-proxy-5hwf4" in "kube-system" namespace has status "Ready":"True"
	I0103 20:14:00.736690   62015 pod_ready.go:81] duration metric: took 7.783204ms waiting for pod "kube-proxy-5hwf4" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:00.736699   62015 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-749210" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:02.245720   62015 pod_ready.go:92] pod "kube-scheduler-no-preload-749210" in "kube-system" namespace has status "Ready":"True"
	I0103 20:14:02.245750   62015 pod_ready.go:81] duration metric: took 1.509042822s waiting for pod "kube-scheduler-no-preload-749210" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:02.245764   62015 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:04.253082   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:02.580440   62050 addons.go:508] enable addons completed in 2.256058454s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0103 20:14:02.845486   62050 node_ready.go:58] node "default-k8s-diff-port-018788" has status "Ready":"False"
	I0103 20:14:05.343961   62050 node_ready.go:58] node "default-k8s-diff-port-018788" has status "Ready":"False"
	I0103 20:14:03.542326   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:03.542407   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:03.554128   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:04.041685   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:04.041779   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:04.053727   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:04.542332   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:04.542417   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:04.554478   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:05.042026   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:05.042120   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:05.055763   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:05.541892   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:05.541996   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:05.554974   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:06.042576   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:06.042675   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:06.055902   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:06.542543   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:06.542636   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:06.555494   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:07.041757   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:07.041844   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:07.053440   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:07.542083   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:07.542162   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:07.555336   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:08.041841   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:08.041929   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:08.055229   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:04.936356   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:06.938795   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:06.754049   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:09.253568   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:06.345058   62050 node_ready.go:49] node "default-k8s-diff-port-018788" has status "Ready":"True"
	I0103 20:14:06.345083   62050 node_ready.go:38] duration metric: took 5.505020144s waiting for node "default-k8s-diff-port-018788" to be "Ready" ...
	I0103 20:14:06.345094   62050 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:14:06.351209   62050 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-zxzqg" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:06.357786   62050 pod_ready.go:92] pod "coredns-5dd5756b68-zxzqg" in "kube-system" namespace has status "Ready":"True"
	I0103 20:14:06.357811   62050 pod_ready.go:81] duration metric: took 6.576128ms waiting for pod "coredns-5dd5756b68-zxzqg" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:06.357819   62050 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-018788" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:08.365570   62050 pod_ready.go:102] pod "etcd-default-k8s-diff-port-018788" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:10.366402   62050 pod_ready.go:102] pod "etcd-default-k8s-diff-port-018788" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:08.542285   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:08.542428   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:08.554155   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:09.041695   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:09.041800   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:09.054337   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:09.541733   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:09.541817   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:09.554231   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:10.041785   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:10.041863   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:10.053870   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:10.541893   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:10.541988   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:10.554220   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:11.042579   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:11.042662   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:11.054683   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:11.054717   61400 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0103 20:14:11.054728   61400 kubeadm.go:1135] stopping kube-system containers ...
	I0103 20:14:11.054738   61400 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0103 20:14:11.054804   61400 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0103 20:14:11.099741   61400 cri.go:89] found id: ""
	I0103 20:14:11.099806   61400 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0103 20:14:11.115939   61400 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0103 20:14:11.125253   61400 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0103 20:14:11.125309   61400 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0103 20:14:11.134126   61400 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0103 20:14:11.134151   61400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:14:11.244373   61400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:14:12.026578   61400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:14:12.238755   61400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:14:12.326635   61400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:14:12.411494   61400 api_server.go:52] waiting for apiserver process to appear ...
	I0103 20:14:12.411597   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:14:12.912324   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
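The five kubeadm invocations above rebuild the control plane one phase at a time (certs, kubeconfig, kubelet-start, control-plane, etcd) from the generated /var/tmp/minikube/kubeadm.yaml, after which the code returns to waiting for an apiserver process. A hedged Go sketch of running that same phase sequence is below; the paths come from the log, and the real run wraps each command in "sudo env PATH=..." over SSH, which the sketch omits.

    // Illustrative only: run the individual `kubeadm init phase` steps in the
    // order seen in the log, against the generated kubeadm.yaml.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	phases := [][]string{
    		{"certs", "all"},
    		{"kubeconfig", "all"},
    		{"kubelet-start"},
    		{"control-plane", "all"},
    		{"etcd", "local"},
    	}
    	kubeadm := "/var/lib/minikube/binaries/v1.16.0/kubeadm" // path from the log
    	for _, phase := range phases {
    		args := append([]string{"init", "phase"}, phase...)
    		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
    		cmd := exec.Command(kubeadm, args...)
    		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    		fmt.Println("running:", kubeadm, args)
    		if err := cmd.Run(); err != nil {
    			fmt.Fprintln(os.Stderr, "phase failed:", err)
    			os.Exit(1)
    		}
    	}
    }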
	I0103 20:14:09.437304   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:11.937833   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:11.755341   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:14.254295   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:10.864860   62050 pod_ready.go:92] pod "etcd-default-k8s-diff-port-018788" in "kube-system" namespace has status "Ready":"True"
	I0103 20:14:10.864892   62050 pod_ready.go:81] duration metric: took 4.507065243s waiting for pod "etcd-default-k8s-diff-port-018788" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:10.864906   62050 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-018788" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:10.871510   62050 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-018788" in "kube-system" namespace has status "Ready":"True"
	I0103 20:14:10.871532   62050 pod_ready.go:81] duration metric: took 6.618246ms waiting for pod "kube-apiserver-default-k8s-diff-port-018788" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:10.871542   62050 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-018788" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:10.877385   62050 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-018788" in "kube-system" namespace has status "Ready":"True"
	I0103 20:14:10.877411   62050 pod_ready.go:81] duration metric: took 5.859396ms waiting for pod "kube-controller-manager-default-k8s-diff-port-018788" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:10.877423   62050 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wqjlv" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:10.883355   62050 pod_ready.go:92] pod "kube-proxy-wqjlv" in "kube-system" namespace has status "Ready":"True"
	I0103 20:14:10.883381   62050 pod_ready.go:81] duration metric: took 5.949857ms waiting for pod "kube-proxy-wqjlv" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:10.883391   62050 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-018788" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:10.888160   62050 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-018788" in "kube-system" namespace has status "Ready":"True"
	I0103 20:14:10.888186   62050 pod_ready.go:81] duration metric: took 4.782893ms waiting for pod "kube-scheduler-default-k8s-diff-port-018788" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:10.888198   62050 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:12.896310   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:14.897306   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:13.412544   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:14:13.912006   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:14:13.939301   61400 api_server.go:72] duration metric: took 1.527807222s to wait for apiserver process to appear ...
	I0103 20:14:13.939328   61400 api_server.go:88] waiting for apiserver healthz status ...
	I0103 20:14:13.939357   61400 api_server.go:253] Checking apiserver healthz at https://192.168.72.12:8443/healthz ...
	I0103 20:14:13.941001   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:16.438272   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:16.752567   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:18.758446   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:17.397429   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:19.399199   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:18.940403   61400 api_server.go:269] stopped: https://192.168.72.12:8443/healthz: Get "https://192.168.72.12:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0103 20:14:18.940444   61400 api_server.go:253] Checking apiserver healthz at https://192.168.72.12:8443/healthz ...
	I0103 20:14:19.563874   61400 api_server.go:279] https://192.168.72.12:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0103 20:14:19.563907   61400 api_server.go:103] status: https://192.168.72.12:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0103 20:14:19.563925   61400 api_server.go:253] Checking apiserver healthz at https://192.168.72.12:8443/healthz ...
	I0103 20:14:19.591366   61400 api_server.go:279] https://192.168.72.12:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0103 20:14:19.591397   61400 api_server.go:103] status: https://192.168.72.12:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0103 20:14:19.939684   61400 api_server.go:253] Checking apiserver healthz at https://192.168.72.12:8443/healthz ...
	I0103 20:14:19.951743   61400 api_server.go:279] https://192.168.72.12:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0103 20:14:19.951795   61400 api_server.go:103] status: https://192.168.72.12:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0103 20:14:20.439712   61400 api_server.go:253] Checking apiserver healthz at https://192.168.72.12:8443/healthz ...
	I0103 20:14:20.448251   61400 api_server.go:279] https://192.168.72.12:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0103 20:14:20.448289   61400 api_server.go:103] status: https://192.168.72.12:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0103 20:14:20.939773   61400 api_server.go:253] Checking apiserver healthz at https://192.168.72.12:8443/healthz ...
	I0103 20:14:20.946227   61400 api_server.go:279] https://192.168.72.12:8443/healthz returned 200:
	ok
	I0103 20:14:20.954666   61400 api_server.go:141] control plane version: v1.16.0
	I0103 20:14:20.954702   61400 api_server.go:131] duration metric: took 7.015366394s to wait for apiserver health ...
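The healthz exchange above is typical of an apiserver coming back up: first a 403 for the anonymous user before the RBAC bootstrap roles exist, then a 500 while post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes, ca-registration) are still failing, and finally a 200 "ok". An illustrative Go probe of the same endpoint follows; it skips TLS verification purely for the sketch and uses the address from the log, and it is not minikube's own health check.

    // Illustrative only: poll the apiserver /healthz endpoint, accepting the
    // node's self-signed certificate and treating anything but 200 as
    // "not healthy yet".
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			// The control-plane cert is signed by minikubeCA; verification is
    			// skipped here only to keep the sketch self-contained.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	url := "https://192.168.72.12:8443/healthz" // address taken from the log above
    	for i := 0; i < 20; i++ {
    		resp, err := client.Get(url)
    		if err != nil {
    			fmt.Println("unreachable:", err)
    		} else {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
    			if resp.StatusCode == http.StatusOK {
    				return // the "ok" case seen at the end of the log excerpt
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }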
	I0103 20:14:20.954718   61400 cni.go:84] Creating CNI manager for ""
	I0103 20:14:20.954726   61400 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0103 20:14:20.956786   61400 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0103 20:14:20.958180   61400 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0103 20:14:20.969609   61400 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0103 20:14:20.986353   61400 system_pods.go:43] waiting for kube-system pods to appear ...
	I0103 20:14:20.996751   61400 system_pods.go:59] 8 kube-system pods found
	I0103 20:14:20.996786   61400 system_pods.go:61] "coredns-5644d7b6d9-99qhg" [d43c98b2-5ed4-42a7-bdb9-28f5b3c7b99f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0103 20:14:20.996795   61400 system_pods.go:61] "coredns-5644d7b6d9-nvbsl" [22884cc1-f360-4ee8-bafc-340bb24faa41] Running
	I0103 20:14:20.996804   61400 system_pods.go:61] "etcd-old-k8s-version-927922" [f395d0d3-416a-4915-b587-6e51eb8648a2] Running
	I0103 20:14:20.996811   61400 system_pods.go:61] "kube-apiserver-old-k8s-version-927922" [c62c011b-74fa-440c-9ff9-56721cb1a58d] Running
	I0103 20:14:20.996821   61400 system_pods.go:61] "kube-controller-manager-old-k8s-version-927922" [3d85024c-8cc4-4a99-b8b7-2151c10918f7] Pending
	I0103 20:14:20.996828   61400 system_pods.go:61] "kube-proxy-jk7jw" [ef720f69-1bfd-4e75-9943-ff7ee3145ecc] Running
	I0103 20:14:20.996835   61400 system_pods.go:61] "kube-scheduler-old-k8s-version-927922" [74ed1414-7a76-45bd-9c0e-e4c9670d4c1b] Running
	I0103 20:14:20.996845   61400 system_pods.go:61] "storage-provisioner" [4157ff41-1b3b-4eb7-b23b-2de69398161c] Running
	I0103 20:14:20.996857   61400 system_pods.go:74] duration metric: took 10.474644ms to wait for pod list to return data ...
	I0103 20:14:20.996870   61400 node_conditions.go:102] verifying NodePressure condition ...
	I0103 20:14:21.000635   61400 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0103 20:14:21.000665   61400 node_conditions.go:123] node cpu capacity is 2
	I0103 20:14:21.000677   61400 node_conditions.go:105] duration metric: took 3.80125ms to run NodePressure ...
	I0103 20:14:21.000698   61400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:14:21.233310   61400 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0103 20:14:21.241408   61400 kubeadm.go:787] kubelet initialised
	I0103 20:14:21.241445   61400 kubeadm.go:788] duration metric: took 8.096237ms waiting for restarted kubelet to initialise ...
	I0103 20:14:21.241456   61400 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:14:21.251897   61400 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-99qhg" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:21.264624   61400 pod_ready.go:97] node "old-k8s-version-927922" hosting pod "coredns-5644d7b6d9-99qhg" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-927922" has status "Ready":"False"
	I0103 20:14:21.264657   61400 pod_ready.go:81] duration metric: took 12.728783ms waiting for pod "coredns-5644d7b6d9-99qhg" in "kube-system" namespace to be "Ready" ...
	E0103 20:14:21.264670   61400 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-927922" hosting pod "coredns-5644d7b6d9-99qhg" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-927922" has status "Ready":"False"
	I0103 20:14:21.264700   61400 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-nvbsl" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:21.282371   61400 pod_ready.go:97] node "old-k8s-version-927922" hosting pod "coredns-5644d7b6d9-nvbsl" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-927922" has status "Ready":"False"
	I0103 20:14:21.282400   61400 pod_ready.go:81] duration metric: took 17.657706ms waiting for pod "coredns-5644d7b6d9-nvbsl" in "kube-system" namespace to be "Ready" ...
	E0103 20:14:21.282410   61400 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-927922" hosting pod "coredns-5644d7b6d9-nvbsl" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-927922" has status "Ready":"False"
	I0103 20:14:21.282416   61400 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-927922" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:21.288986   61400 pod_ready.go:97] node "old-k8s-version-927922" hosting pod "etcd-old-k8s-version-927922" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-927922" has status "Ready":"False"
	I0103 20:14:21.289016   61400 pod_ready.go:81] duration metric: took 6.590018ms waiting for pod "etcd-old-k8s-version-927922" in "kube-system" namespace to be "Ready" ...
	E0103 20:14:21.289028   61400 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-927922" hosting pod "etcd-old-k8s-version-927922" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-927922" has status "Ready":"False"
	I0103 20:14:21.289036   61400 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-927922" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:21.391318   61400 pod_ready.go:97] node "old-k8s-version-927922" hosting pod "kube-apiserver-old-k8s-version-927922" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-927922" has status "Ready":"False"
	I0103 20:14:21.391358   61400 pod_ready.go:81] duration metric: took 102.309139ms waiting for pod "kube-apiserver-old-k8s-version-927922" in "kube-system" namespace to be "Ready" ...
	E0103 20:14:21.391371   61400 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-927922" hosting pod "kube-apiserver-old-k8s-version-927922" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-927922" has status "Ready":"False"
	I0103 20:14:21.391390   61400 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-927922" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:21.790147   61400 pod_ready.go:97] node "old-k8s-version-927922" hosting pod "kube-controller-manager-old-k8s-version-927922" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-927922" has status "Ready":"False"
	I0103 20:14:21.790184   61400 pod_ready.go:81] duration metric: took 398.776559ms waiting for pod "kube-controller-manager-old-k8s-version-927922" in "kube-system" namespace to be "Ready" ...
	E0103 20:14:21.790202   61400 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-927922" hosting pod "kube-controller-manager-old-k8s-version-927922" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-927922" has status "Ready":"False"
	I0103 20:14:21.790213   61400 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jk7jw" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:22.190088   61400 pod_ready.go:97] node "old-k8s-version-927922" hosting pod "kube-proxy-jk7jw" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-927922" has status "Ready":"False"
	I0103 20:14:22.190118   61400 pod_ready.go:81] duration metric: took 399.895826ms waiting for pod "kube-proxy-jk7jw" in "kube-system" namespace to be "Ready" ...
	E0103 20:14:22.190132   61400 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-927922" hosting pod "kube-proxy-jk7jw" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-927922" has status "Ready":"False"
	I0103 20:14:22.190146   61400 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-927922" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:22.590412   61400 pod_ready.go:97] node "old-k8s-version-927922" hosting pod "kube-scheduler-old-k8s-version-927922" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-927922" has status "Ready":"False"
	I0103 20:14:22.590470   61400 pod_ready.go:81] duration metric: took 400.308646ms waiting for pod "kube-scheduler-old-k8s-version-927922" in "kube-system" namespace to be "Ready" ...
	E0103 20:14:22.590484   61400 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-927922" hosting pod "kube-scheduler-old-k8s-version-927922" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-927922" has status "Ready":"False"
	I0103 20:14:22.590494   61400 pod_ready.go:38] duration metric: took 1.349028144s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
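Every pod wait above follows the same pattern: fetch the pod, inspect its PodReady condition, and skip early when the hosting node itself is not Ready. A minimal client-go sketch of that kind of readiness poll is below; the kubeconfig path and pod name are taken from the log for illustration, and this is not minikube's actual pod_ready helper.

    // Minimal sketch of a "wait until the pod's Ready condition is True" loop.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's PodReady condition is True.
    func podReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	// Kubeconfig path as written to in the log; any reachable kubeconfig works.
    	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17885-9609/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		panic(err)
    	}
    	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
    	defer cancel()
    	for {
    		pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "etcd-old-k8s-version-927922", metav1.GetOptions{})
    		if err == nil && podReady(pod) {
    			fmt.Println("pod is Ready")
    			return
    		}
    		select {
    		case <-ctx.Done():
    			fmt.Println("timed out waiting for pod to be Ready")
    			return
    		case <-time.After(2 * time.Second):
    		}
    	}
    }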
	I0103 20:14:22.590533   61400 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0103 20:14:22.610035   61400 ops.go:34] apiserver oom_adj: -16
	I0103 20:14:22.610060   61400 kubeadm.go:640] restartCluster took 21.598991094s
	I0103 20:14:22.610071   61400 kubeadm.go:406] StartCluster complete in 21.660680377s
	I0103 20:14:22.610091   61400 settings.go:142] acquiring lock: {Name:mkd213c48538fa01cb82b417485055a8adbf5e4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:14:22.610178   61400 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17885-9609/kubeconfig
	I0103 20:14:22.613053   61400 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-9609/kubeconfig: {Name:mkbd4e6a8b39f5a4a43fb71671a7bbd8b1617cf1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:14:22.613314   61400 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0103 20:14:22.613472   61400 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0103 20:14:22.613563   61400 config.go:182] Loaded profile config "old-k8s-version-927922": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0103 20:14:22.613570   61400 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-927922"
	I0103 20:14:22.613584   61400 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-927922"
	I0103 20:14:22.613597   61400 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-927922"
	I0103 20:14:22.613625   61400 addons.go:237] Setting addon metrics-server=true in "old-k8s-version-927922"
	W0103 20:14:22.613637   61400 addons.go:246] addon metrics-server should already be in state true
	I0103 20:14:22.613639   61400 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-927922"
	I0103 20:14:22.613605   61400 addons.go:237] Setting addon storage-provisioner=true in "old-k8s-version-927922"
	W0103 20:14:22.613706   61400 addons.go:246] addon storage-provisioner should already be in state true
	I0103 20:14:22.613769   61400 host.go:66] Checking if "old-k8s-version-927922" exists ...
	I0103 20:14:22.613691   61400 host.go:66] Checking if "old-k8s-version-927922" exists ...
	I0103 20:14:22.614097   61400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:14:22.614129   61400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:14:22.614170   61400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:14:22.614204   61400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:14:22.614293   61400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:14:22.614334   61400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:14:22.631032   61400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43511
	I0103 20:14:22.631689   61400 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:14:22.632149   61400 main.go:141] libmachine: Using API Version  1
	I0103 20:14:22.632172   61400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:14:22.632553   61400 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:14:22.632811   61400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46781
	I0103 20:14:22.632820   61400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42907
	I0103 20:14:22.633222   61400 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:14:22.633340   61400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:14:22.633352   61400 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:14:22.633385   61400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:14:22.633695   61400 main.go:141] libmachine: Using API Version  1
	I0103 20:14:22.633719   61400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:14:22.634106   61400 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:14:22.634117   61400 main.go:141] libmachine: Using API Version  1
	I0103 20:14:22.634139   61400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:14:22.634544   61400 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:14:22.634711   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetState
	I0103 20:14:22.634782   61400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:14:22.634821   61400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:14:22.639076   61400 addons.go:237] Setting addon default-storageclass=true in "old-k8s-version-927922"
	W0103 20:14:22.639233   61400 addons.go:246] addon default-storageclass should already be in state true
	I0103 20:14:22.639274   61400 host.go:66] Checking if "old-k8s-version-927922" exists ...
	I0103 20:14:22.640636   61400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:14:22.640703   61400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:14:22.653581   61400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38773
	I0103 20:14:22.654135   61400 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:14:22.654693   61400 main.go:141] libmachine: Using API Version  1
	I0103 20:14:22.654720   61400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:14:22.655050   61400 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:14:22.655267   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetState
	I0103 20:14:22.655611   61400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45149
	I0103 20:14:22.656058   61400 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:14:22.656503   61400 main.go:141] libmachine: Using API Version  1
	I0103 20:14:22.656527   61400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:14:22.656976   61400 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:14:22.657189   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetState
	I0103 20:14:22.657904   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .DriverName
	I0103 20:14:22.660090   61400 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 20:14:22.659044   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .DriverName
	I0103 20:14:22.659283   61400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38149
	I0103 20:14:22.663010   61400 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0103 20:14:22.663022   61400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0103 20:14:22.663037   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHHostname
	I0103 20:14:22.664758   61400 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0103 20:14:22.663341   61400 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:14:22.665665   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:14:22.666177   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:79:06", ip: ""} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:14:22.666201   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:14:22.666255   61400 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0103 20:14:22.666266   61400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0103 20:14:22.666282   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHHostname
	I0103 20:14:22.666382   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHPort
	I0103 20:14:22.666505   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHKeyPath
	I0103 20:14:22.666726   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHUsername
	I0103 20:14:22.666884   61400 sshutil.go:53] new ssh client: &{IP:192.168.72.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/old-k8s-version-927922/id_rsa Username:docker}
	I0103 20:14:22.666901   61400 main.go:141] libmachine: Using API Version  1
	I0103 20:14:22.666926   61400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:14:22.667344   61400 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:14:22.667940   61400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:14:22.667983   61400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:14:22.668718   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:14:22.668933   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:79:06", ip: ""} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:14:22.668961   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:14:22.669116   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHPort
	I0103 20:14:22.669262   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHKeyPath
	I0103 20:14:22.669388   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHUsername
	I0103 20:14:22.669506   61400 sshutil.go:53] new ssh client: &{IP:192.168.72.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/old-k8s-version-927922/id_rsa Username:docker}
	I0103 20:14:22.711545   61400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42371
	I0103 20:14:22.711969   61400 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:14:22.712493   61400 main.go:141] libmachine: Using API Version  1
	I0103 20:14:22.712519   61400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:14:22.712853   61400 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:14:22.713077   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetState
	I0103 20:14:22.715086   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .DriverName
	I0103 20:14:22.715371   61400 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0103 20:14:22.715390   61400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0103 20:14:22.715405   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHHostname
	I0103 20:14:22.718270   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:14:22.718638   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:79:06", ip: ""} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:14:22.718671   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:14:22.718876   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHPort
	I0103 20:14:22.719076   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHKeyPath
	I0103 20:14:22.719263   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHUsername
	I0103 20:14:22.719451   61400 sshutil.go:53] new ssh client: &{IP:192.168.72.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/old-k8s-version-927922/id_rsa Username:docker}
	I0103 20:14:22.795601   61400 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0103 20:14:22.887631   61400 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0103 20:14:22.887660   61400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0103 20:14:22.889717   61400 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0103 20:14:22.932293   61400 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0103 20:14:22.932324   61400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0103 20:14:22.939480   61400 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0103 20:14:22.979425   61400 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0103 20:14:22.979455   61400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0103 20:14:23.017495   61400 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
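The addon manifests are first scp'd into /etc/kubernetes/addons and then applied in a single kubectl call with the in-VM kubeconfig, as in the line above. A small Go sketch of that apply step follows; the paths are taken from the log, and the real command additionally runs under sudo over SSH, which the sketch leaves out.

    // Illustrative only: apply a set of addon manifests with the cluster's own
    // kubectl binary and the in-VM kubeconfig.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	manifests := []string{
    		"/etc/kubernetes/addons/metrics-apiservice.yaml",
    		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
    		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
    		"/etc/kubernetes/addons/metrics-server-service.yaml",
    	}
    	args := []string{"apply"}
    	for _, m := range manifests {
    		args = append(args, "-f", m)
    	}
    	cmd := exec.Command("/var/lib/minikube/binaries/v1.16.0/kubectl", args...)
    	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	if err := cmd.Run(); err != nil {
    		fmt.Fprintln(os.Stderr, "kubectl apply failed:", err)
    		os.Exit(1)
    	}
    }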
	I0103 20:14:23.255786   61400 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-927922" context rescaled to 1 replicas
	I0103 20:14:23.255832   61400 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.12 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0103 20:14:23.257785   61400 out.go:177] * Verifying Kubernetes components...
	I0103 20:14:18.937821   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:21.435750   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:23.438082   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:23.259380   61400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 20:14:23.430371   61400 main.go:141] libmachine: Making call to close driver server
	I0103 20:14:23.430402   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .Close
	I0103 20:14:23.430532   61400 main.go:141] libmachine: Making call to close driver server
	I0103 20:14:23.430557   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .Close
	I0103 20:14:23.430710   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | Closing plugin on server side
	I0103 20:14:23.430741   61400 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:14:23.430778   61400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:14:23.430798   61400 main.go:141] libmachine: Making call to close driver server
	I0103 20:14:23.430806   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .Close
	I0103 20:14:23.432333   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | Closing plugin on server side
	I0103 20:14:23.432345   61400 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:14:23.432353   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | Closing plugin on server side
	I0103 20:14:23.432363   61400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:14:23.432373   61400 main.go:141] libmachine: Making call to close driver server
	I0103 20:14:23.432382   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .Close
	I0103 20:14:23.432383   61400 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:14:23.432394   61400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:14:23.432602   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | Closing plugin on server side
	I0103 20:14:23.432654   61400 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:14:23.432674   61400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:14:23.438313   61400 main.go:141] libmachine: Making call to close driver server
	I0103 20:14:23.438335   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .Close
	I0103 20:14:23.438566   61400 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:14:23.438585   61400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:14:23.438662   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | Closing plugin on server side
	I0103 20:14:23.598304   61400 main.go:141] libmachine: Making call to close driver server
	I0103 20:14:23.598338   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .Close
	I0103 20:14:23.598363   61400 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-927922" to be "Ready" ...
	I0103 20:14:23.598669   61400 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:14:23.598687   61400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:14:23.598696   61400 main.go:141] libmachine: Making call to close driver server
	I0103 20:14:23.598705   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .Close
	I0103 20:14:23.598917   61400 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:14:23.598938   61400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:14:23.598960   61400 addons.go:473] Verifying addon metrics-server=true in "old-k8s-version-927922"
	I0103 20:14:23.601038   61400 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0103 20:14:21.253707   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:23.254276   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:21.399352   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:23.895781   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:23.602562   61400 addons.go:508] enable addons completed in 989.095706ms: enabled=[storage-provisioner default-storageclass metrics-server]
	I0103 20:14:25.602268   61400 node_ready.go:58] node "old-k8s-version-927922" has status "Ready":"False"
	I0103 20:14:27.602561   61400 node_ready.go:58] node "old-k8s-version-927922" has status "Ready":"False"
	I0103 20:14:25.439366   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:27.934938   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:25.753982   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:28.253688   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:26.398696   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:28.896789   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:29.603040   61400 node_ready.go:58] node "old-k8s-version-927922" has status "Ready":"False"
	I0103 20:14:30.102640   61400 node_ready.go:49] node "old-k8s-version-927922" has status "Ready":"True"
	I0103 20:14:30.102663   61400 node_ready.go:38] duration metric: took 6.504277703s waiting for node "old-k8s-version-927922" to be "Ready" ...
	I0103 20:14:30.102672   61400 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:14:30.107593   61400 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-nvbsl" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:30.112792   61400 pod_ready.go:92] pod "coredns-5644d7b6d9-nvbsl" in "kube-system" namespace has status "Ready":"True"
	I0103 20:14:30.112817   61400 pod_ready.go:81] duration metric: took 5.195453ms waiting for pod "coredns-5644d7b6d9-nvbsl" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:30.112828   61400 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-927922" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:30.117802   61400 pod_ready.go:92] pod "etcd-old-k8s-version-927922" in "kube-system" namespace has status "Ready":"True"
	I0103 20:14:30.117827   61400 pod_ready.go:81] duration metric: took 4.989616ms waiting for pod "etcd-old-k8s-version-927922" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:30.117839   61400 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-927922" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:30.123548   61400 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-927922" in "kube-system" namespace has status "Ready":"True"
	I0103 20:14:30.123570   61400 pod_ready.go:81] duration metric: took 5.723206ms waiting for pod "kube-apiserver-old-k8s-version-927922" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:30.123580   61400 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-927922" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:30.128232   61400 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-927922" in "kube-system" namespace has status "Ready":"True"
	I0103 20:14:30.128257   61400 pod_ready.go:81] duration metric: took 4.670196ms waiting for pod "kube-controller-manager-old-k8s-version-927922" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:30.128269   61400 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jk7jw" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:30.503735   61400 pod_ready.go:92] pod "kube-proxy-jk7jw" in "kube-system" namespace has status "Ready":"True"
	I0103 20:14:30.503782   61400 pod_ready.go:81] duration metric: took 375.504442ms waiting for pod "kube-proxy-jk7jw" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:30.503796   61400 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-927922" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:30.903117   61400 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-927922" in "kube-system" namespace has status "Ready":"True"
	I0103 20:14:30.903145   61400 pod_ready.go:81] duration metric: took 399.341883ms waiting for pod "kube-scheduler-old-k8s-version-927922" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:30.903155   61400 pod_ready.go:38] duration metric: took 800.474934ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:14:30.903167   61400 api_server.go:52] waiting for apiserver process to appear ...
	I0103 20:14:30.903215   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:14:30.917506   61400 api_server.go:72] duration metric: took 7.661640466s to wait for apiserver process to appear ...
	I0103 20:14:30.917537   61400 api_server.go:88] waiting for apiserver healthz status ...
	I0103 20:14:30.917558   61400 api_server.go:253] Checking apiserver healthz at https://192.168.72.12:8443/healthz ...
	I0103 20:14:30.923921   61400 api_server.go:279] https://192.168.72.12:8443/healthz returned 200:
	ok
	I0103 20:14:30.924810   61400 api_server.go:141] control plane version: v1.16.0
	I0103 20:14:30.924830   61400 api_server.go:131] duration metric: took 7.286806ms to wait for apiserver health ...
	I0103 20:14:30.924837   61400 system_pods.go:43] waiting for kube-system pods to appear ...
	I0103 20:14:31.105108   61400 system_pods.go:59] 7 kube-system pods found
	I0103 20:14:31.105140   61400 system_pods.go:61] "coredns-5644d7b6d9-nvbsl" [22884cc1-f360-4ee8-bafc-340bb24faa41] Running
	I0103 20:14:31.105144   61400 system_pods.go:61] "etcd-old-k8s-version-927922" [f395d0d3-416a-4915-b587-6e51eb8648a2] Running
	I0103 20:14:31.105149   61400 system_pods.go:61] "kube-apiserver-old-k8s-version-927922" [c62c011b-74fa-440c-9ff9-56721cb1a58d] Running
	I0103 20:14:31.105153   61400 system_pods.go:61] "kube-controller-manager-old-k8s-version-927922" [3d85024c-8cc4-4a99-b8b7-2151c10918f7] Running
	I0103 20:14:31.105156   61400 system_pods.go:61] "kube-proxy-jk7jw" [ef720f69-1bfd-4e75-9943-ff7ee3145ecc] Running
	I0103 20:14:31.105160   61400 system_pods.go:61] "kube-scheduler-old-k8s-version-927922" [74ed1414-7a76-45bd-9c0e-e4c9670d4c1b] Running
	I0103 20:14:31.105164   61400 system_pods.go:61] "storage-provisioner" [4157ff41-1b3b-4eb7-b23b-2de69398161c] Running
	I0103 20:14:31.105168   61400 system_pods.go:74] duration metric: took 180.326535ms to wait for pod list to return data ...
	I0103 20:14:31.105176   61400 default_sa.go:34] waiting for default service account to be created ...
	I0103 20:14:31.303919   61400 default_sa.go:45] found service account: "default"
	I0103 20:14:31.303945   61400 default_sa.go:55] duration metric: took 198.763782ms for default service account to be created ...
	I0103 20:14:31.303952   61400 system_pods.go:116] waiting for k8s-apps to be running ...
	I0103 20:14:31.504913   61400 system_pods.go:86] 7 kube-system pods found
	I0103 20:14:31.504942   61400 system_pods.go:89] "coredns-5644d7b6d9-nvbsl" [22884cc1-f360-4ee8-bafc-340bb24faa41] Running
	I0103 20:14:31.504948   61400 system_pods.go:89] "etcd-old-k8s-version-927922" [f395d0d3-416a-4915-b587-6e51eb8648a2] Running
	I0103 20:14:31.504952   61400 system_pods.go:89] "kube-apiserver-old-k8s-version-927922" [c62c011b-74fa-440c-9ff9-56721cb1a58d] Running
	I0103 20:14:31.504960   61400 system_pods.go:89] "kube-controller-manager-old-k8s-version-927922" [3d85024c-8cc4-4a99-b8b7-2151c10918f7] Running
	I0103 20:14:31.504964   61400 system_pods.go:89] "kube-proxy-jk7jw" [ef720f69-1bfd-4e75-9943-ff7ee3145ecc] Running
	I0103 20:14:31.504967   61400 system_pods.go:89] "kube-scheduler-old-k8s-version-927922" [74ed1414-7a76-45bd-9c0e-e4c9670d4c1b] Running
	I0103 20:14:31.504971   61400 system_pods.go:89] "storage-provisioner" [4157ff41-1b3b-4eb7-b23b-2de69398161c] Running
	I0103 20:14:31.504978   61400 system_pods.go:126] duration metric: took 201.020363ms to wait for k8s-apps to be running ...
	I0103 20:14:31.504987   61400 system_svc.go:44] waiting for kubelet service to be running ....
	I0103 20:14:31.505042   61400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 20:14:31.519544   61400 system_svc.go:56] duration metric: took 14.547054ms WaitForService to wait for kubelet.
	I0103 20:14:31.519581   61400 kubeadm.go:581] duration metric: took 8.263723255s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0103 20:14:31.519604   61400 node_conditions.go:102] verifying NodePressure condition ...
	I0103 20:14:31.703367   61400 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0103 20:14:31.703393   61400 node_conditions.go:123] node cpu capacity is 2
	I0103 20:14:31.703402   61400 node_conditions.go:105] duration metric: took 183.794284ms to run NodePressure ...
	I0103 20:14:31.703413   61400 start.go:228] waiting for startup goroutines ...
	I0103 20:14:31.703419   61400 start.go:233] waiting for cluster config update ...
	I0103 20:14:31.703427   61400 start.go:242] writing updated cluster config ...
	I0103 20:14:31.703726   61400 ssh_runner.go:195] Run: rm -f paused
	I0103 20:14:31.752491   61400 start.go:600] kubectl: 1.29.0, cluster: 1.16.0 (minor skew: 13)
	I0103 20:14:31.754609   61400 out.go:177] 
	W0103 20:14:31.756132   61400 out.go:239] ! /usr/local/bin/kubectl is version 1.29.0, which may have incompatibilities with Kubernetes 1.16.0.
	I0103 20:14:31.757531   61400 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0103 20:14:31.758908   61400 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-927922" cluster and "default" namespace by default
	I0103 20:14:29.937557   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:32.437025   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:30.253875   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:32.752584   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:30.898036   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:33.398935   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:34.936535   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:37.436533   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:34.753233   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:37.252419   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:39.253992   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:35.896170   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:37.897520   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:40.397608   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:39.438748   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:41.439514   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:41.254480   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:43.756719   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:42.397869   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:44.398305   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:43.935597   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:45.936279   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:47.939184   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:46.253445   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:48.254497   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:46.896653   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:49.395106   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:50.436008   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:52.436929   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:50.754391   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:53.253984   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:51.396664   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:53.895621   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:54.937380   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:57.435980   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:55.254262   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:57.254379   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:56.399473   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:58.895378   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:59.436517   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:01.436644   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:03.437289   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:59.754343   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:02.256605   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:00.896080   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:02.896456   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:05.396614   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:05.935218   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:07.936528   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:04.753320   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:06.753702   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:08.754470   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:07.909774   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:10.398298   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:10.435847   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:12.436285   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:10.755735   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:13.260340   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:12.898368   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:15.395141   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:14.437252   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:16.437752   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:15.753850   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:18.252984   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:17.396224   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:19.396412   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:18.935744   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:20.936627   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:22.937157   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:20.753996   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:23.252893   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:21.396466   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:23.396556   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:25.435441   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:27.437177   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:25.253294   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:27.257573   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:25.895526   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:27.897999   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:30.396749   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:29.935811   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:31.936769   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:29.754895   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:32.252296   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:34.252439   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:32.398706   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:34.895914   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:34.435649   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:36.435937   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:36.253151   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:38.753045   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:36.897764   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:39.395522   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:38.935209   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:40.935922   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:42.936185   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:40.753242   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:43.254160   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:41.395722   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:43.895476   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:44.938043   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:47.436185   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:45.753607   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:47.757575   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:45.895628   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:47.898831   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:50.395366   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:49.437057   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:51.936658   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:50.254313   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:52.754096   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:52.396047   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:54.896005   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:53.937359   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:55.939092   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:58.435858   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:55.253159   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:57.752873   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:56.897368   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:59.397094   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:00.937099   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:02.937220   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:59.753924   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:01.754227   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:04.253189   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:01.895645   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:03.895950   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:05.435964   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:07.437247   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:06.753405   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:09.252564   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:06.395775   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:08.397119   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:09.937945   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:12.436531   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:11.254482   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:13.753409   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:10.898350   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:13.397549   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:14.936753   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:17.438482   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:15.753689   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:18.253420   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:15.895365   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:17.897998   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:19.898464   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:19.935559   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:21.935664   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:20.253748   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:22.253878   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:24.254457   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:22.395466   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:24.400100   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:23.935958   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:25.936631   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:28.436748   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:26.752881   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:29.253740   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:26.897228   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:29.396925   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:30.436921   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:32.939573   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:31.254681   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:33.759891   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:31.895948   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:33.899819   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:35.436828   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:37.437536   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:36.252972   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:38.254083   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:36.396572   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:38.895816   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:39.440085   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:41.939589   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:40.752960   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:42.753342   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:40.897788   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:43.396277   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:44.437295   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:46.934854   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:44.753613   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:47.253118   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:45.896539   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:47.897012   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:50.399452   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:48.936795   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:51.435353   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:53.436742   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:49.753890   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:52.252908   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:54.253390   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:52.895504   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:54.896960   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:55.937358   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:58.435997   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:56.256446   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:58.754312   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:56.898710   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:58.899652   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:00.437252   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:02.936336   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:01.254343   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:03.754483   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:01.398833   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:03.896269   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:05.437531   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:07.935848   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:05.755471   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:07.756171   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:05.897369   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:08.397436   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:09.936237   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:11.940482   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:10.253599   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:12.254176   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:14.254316   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:10.898370   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:13.400421   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:14.436922   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:16.936283   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:16.753503   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:19.253120   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:15.896003   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:18.396552   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:19.438479   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:21.936957   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:21.253522   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:23.752947   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:20.895961   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:23.395452   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:24.435005   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:26.437797   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:26.437828   61676 pod_ready.go:81] duration metric: took 4m0.009294112s waiting for pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace to be "Ready" ...
	E0103 20:17:26.437841   61676 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0103 20:17:26.437850   61676 pod_ready.go:38] duration metric: took 4m1.606787831s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:17:26.437868   61676 api_server.go:52] waiting for apiserver process to appear ...
	I0103 20:17:26.437901   61676 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0103 20:17:26.437951   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0103 20:17:26.499917   61676 cri.go:89] found id: "b43e6c342d85d7f5a7233dc5c79962eece071e18b4bbcd6583c66d34a8bba0d6"
	I0103 20:17:26.499942   61676 cri.go:89] found id: ""
	I0103 20:17:26.499958   61676 logs.go:284] 1 containers: [b43e6c342d85d7f5a7233dc5c79962eece071e18b4bbcd6583c66d34a8bba0d6]
	I0103 20:17:26.500014   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:26.504239   61676 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0103 20:17:26.504290   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0103 20:17:26.539965   61676 cri.go:89] found id: "d5b2310ec90e1b2cf6666d6b054b6e3233664a226da6f2c5dbb9f1aad6bf5e40"
	I0103 20:17:26.539992   61676 cri.go:89] found id: ""
	I0103 20:17:26.540001   61676 logs.go:284] 1 containers: [d5b2310ec90e1b2cf6666d6b054b6e3233664a226da6f2c5dbb9f1aad6bf5e40]
	I0103 20:17:26.540052   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:26.544591   61676 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0103 20:17:26.544667   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0103 20:17:26.583231   61676 cri.go:89] found id: "e982a226a7c2e7dda117aabd99279729198043d8b9c79fdc0904c8384f9a031b"
	I0103 20:17:26.583256   61676 cri.go:89] found id: ""
	I0103 20:17:26.583265   61676 logs.go:284] 1 containers: [e982a226a7c2e7dda117aabd99279729198043d8b9c79fdc0904c8384f9a031b]
	I0103 20:17:26.583328   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:26.587642   61676 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0103 20:17:26.587705   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0103 20:17:26.625230   61676 cri.go:89] found id: "91cc8e54c59c4642dc1a808adf81ed78af36cc6dae57d37fd913d2d01d232f3d"
	I0103 20:17:26.625258   61676 cri.go:89] found id: ""
	I0103 20:17:26.625267   61676 logs.go:284] 1 containers: [91cc8e54c59c4642dc1a808adf81ed78af36cc6dae57d37fd913d2d01d232f3d]
	I0103 20:17:26.625329   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:26.629448   61676 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0103 20:17:26.629527   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0103 20:17:26.666698   61676 cri.go:89] found id: "a076ccb3aaf528efca2f2b4a38d06dc3b8376212edc210731b7c268a24909ccf"
	I0103 20:17:26.666726   61676 cri.go:89] found id: ""
	I0103 20:17:26.666736   61676 logs.go:284] 1 containers: [a076ccb3aaf528efca2f2b4a38d06dc3b8376212edc210731b7c268a24909ccf]
	I0103 20:17:26.666796   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:26.671434   61676 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0103 20:17:26.671500   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0103 20:17:26.703900   61676 cri.go:89] found id: "8049f81441fd2e7bb236ca49b119c49cefb7a5c69b6880f606b3cc2789145523"
	I0103 20:17:26.703921   61676 cri.go:89] found id: ""
	I0103 20:17:26.703929   61676 logs.go:284] 1 containers: [8049f81441fd2e7bb236ca49b119c49cefb7a5c69b6880f606b3cc2789145523]
	I0103 20:17:26.703986   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:26.707915   61676 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0103 20:17:26.707979   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0103 20:17:26.747144   61676 cri.go:89] found id: ""
	I0103 20:17:26.747168   61676 logs.go:284] 0 containers: []
	W0103 20:17:26.747182   61676 logs.go:286] No container was found matching "kindnet"
	I0103 20:17:26.747189   61676 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0103 20:17:26.747246   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0103 20:17:26.786418   61676 cri.go:89] found id: "0ed16e65a5dbabdbadbf6d56a4fd7bafe80d50867ed1829e86bfa8f28e8fb719"
	I0103 20:17:26.786441   61676 cri.go:89] found id: "3c57ed4c58edf02d0e73cd6dfc6799993556d74916ffe4c07b91d85346f291e2"
	I0103 20:17:26.786448   61676 cri.go:89] found id: ""
	I0103 20:17:26.786456   61676 logs.go:284] 2 containers: [0ed16e65a5dbabdbadbf6d56a4fd7bafe80d50867ed1829e86bfa8f28e8fb719 3c57ed4c58edf02d0e73cd6dfc6799993556d74916ffe4c07b91d85346f291e2]
	I0103 20:17:26.786515   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:26.790506   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:26.794304   61676 logs.go:123] Gathering logs for kubelet ...
	I0103 20:17:26.794330   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 20:17:26.851272   61676 logs.go:123] Gathering logs for kube-apiserver [b43e6c342d85d7f5a7233dc5c79962eece071e18b4bbcd6583c66d34a8bba0d6] ...
	I0103 20:17:26.851317   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b43e6c342d85d7f5a7233dc5c79962eece071e18b4bbcd6583c66d34a8bba0d6"
	I0103 20:17:26.894480   61676 logs.go:123] Gathering logs for etcd [d5b2310ec90e1b2cf6666d6b054b6e3233664a226da6f2c5dbb9f1aad6bf5e40] ...
	I0103 20:17:26.894508   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5b2310ec90e1b2cf6666d6b054b6e3233664a226da6f2c5dbb9f1aad6bf5e40"
	I0103 20:17:26.941799   61676 logs.go:123] Gathering logs for kube-scheduler [91cc8e54c59c4642dc1a808adf81ed78af36cc6dae57d37fd913d2d01d232f3d] ...
	I0103 20:17:26.941826   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 91cc8e54c59c4642dc1a808adf81ed78af36cc6dae57d37fd913d2d01d232f3d"
	I0103 20:17:26.981759   61676 logs.go:123] Gathering logs for kube-proxy [a076ccb3aaf528efca2f2b4a38d06dc3b8376212edc210731b7c268a24909ccf] ...
	I0103 20:17:26.981793   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a076ccb3aaf528efca2f2b4a38d06dc3b8376212edc210731b7c268a24909ccf"
	I0103 20:17:27.021318   61676 logs.go:123] Gathering logs for storage-provisioner [0ed16e65a5dbabdbadbf6d56a4fd7bafe80d50867ed1829e86bfa8f28e8fb719] ...
	I0103 20:17:27.021347   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ed16e65a5dbabdbadbf6d56a4fd7bafe80d50867ed1829e86bfa8f28e8fb719"
	I0103 20:17:27.061320   61676 logs.go:123] Gathering logs for storage-provisioner [3c57ed4c58edf02d0e73cd6dfc6799993556d74916ffe4c07b91d85346f291e2] ...
	I0103 20:17:27.061351   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c57ed4c58edf02d0e73cd6dfc6799993556d74916ffe4c07b91d85346f291e2"
	I0103 20:17:27.110137   61676 logs.go:123] Gathering logs for dmesg ...
	I0103 20:17:27.110169   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 20:17:27.123548   61676 logs.go:123] Gathering logs for coredns [e982a226a7c2e7dda117aabd99279729198043d8b9c79fdc0904c8384f9a031b] ...
	I0103 20:17:27.123582   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e982a226a7c2e7dda117aabd99279729198043d8b9c79fdc0904c8384f9a031b"
	I0103 20:17:27.162644   61676 logs.go:123] Gathering logs for kube-controller-manager [8049f81441fd2e7bb236ca49b119c49cefb7a5c69b6880f606b3cc2789145523] ...
	I0103 20:17:27.162678   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8049f81441fd2e7bb236ca49b119c49cefb7a5c69b6880f606b3cc2789145523"
	I0103 20:17:27.211599   61676 logs.go:123] Gathering logs for describe nodes ...
	I0103 20:17:27.211636   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0103 20:17:27.361299   61676 logs.go:123] Gathering logs for CRI-O ...
	I0103 20:17:27.361329   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0103 20:17:27.866123   61676 logs.go:123] Gathering logs for container status ...
	I0103 20:17:27.866166   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 20:17:25.753957   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:27.754559   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:25.896204   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:28.395594   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:30.418870   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:17:30.433778   61676 api_server.go:72] duration metric: took 4m12.637164197s to wait for apiserver process to appear ...
	I0103 20:17:30.433801   61676 api_server.go:88] waiting for apiserver healthz status ...
	I0103 20:17:30.433838   61676 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0103 20:17:30.433911   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0103 20:17:30.472309   61676 cri.go:89] found id: "b43e6c342d85d7f5a7233dc5c79962eece071e18b4bbcd6583c66d34a8bba0d6"
	I0103 20:17:30.472337   61676 cri.go:89] found id: ""
	I0103 20:17:30.472348   61676 logs.go:284] 1 containers: [b43e6c342d85d7f5a7233dc5c79962eece071e18b4bbcd6583c66d34a8bba0d6]
	I0103 20:17:30.472407   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:30.476787   61676 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0103 20:17:30.476858   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0103 20:17:30.522290   61676 cri.go:89] found id: "d5b2310ec90e1b2cf6666d6b054b6e3233664a226da6f2c5dbb9f1aad6bf5e40"
	I0103 20:17:30.522322   61676 cri.go:89] found id: ""
	I0103 20:17:30.522334   61676 logs.go:284] 1 containers: [d5b2310ec90e1b2cf6666d6b054b6e3233664a226da6f2c5dbb9f1aad6bf5e40]
	I0103 20:17:30.522390   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:30.526502   61676 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0103 20:17:30.526581   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0103 20:17:30.568301   61676 cri.go:89] found id: "e982a226a7c2e7dda117aabd99279729198043d8b9c79fdc0904c8384f9a031b"
	I0103 20:17:30.568328   61676 cri.go:89] found id: ""
	I0103 20:17:30.568335   61676 logs.go:284] 1 containers: [e982a226a7c2e7dda117aabd99279729198043d8b9c79fdc0904c8384f9a031b]
	I0103 20:17:30.568382   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:30.572398   61676 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0103 20:17:30.572454   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0103 20:17:30.611671   61676 cri.go:89] found id: "91cc8e54c59c4642dc1a808adf81ed78af36cc6dae57d37fd913d2d01d232f3d"
	I0103 20:17:30.611694   61676 cri.go:89] found id: ""
	I0103 20:17:30.611702   61676 logs.go:284] 1 containers: [91cc8e54c59c4642dc1a808adf81ed78af36cc6dae57d37fd913d2d01d232f3d]
	I0103 20:17:30.611749   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:30.615971   61676 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0103 20:17:30.616035   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0103 20:17:30.658804   61676 cri.go:89] found id: "a076ccb3aaf528efca2f2b4a38d06dc3b8376212edc210731b7c268a24909ccf"
	I0103 20:17:30.658830   61676 cri.go:89] found id: ""
	I0103 20:17:30.658839   61676 logs.go:284] 1 containers: [a076ccb3aaf528efca2f2b4a38d06dc3b8376212edc210731b7c268a24909ccf]
	I0103 20:17:30.658889   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:30.662859   61676 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0103 20:17:30.662930   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0103 20:17:30.705941   61676 cri.go:89] found id: "8049f81441fd2e7bb236ca49b119c49cefb7a5c69b6880f606b3cc2789145523"
	I0103 20:17:30.705968   61676 cri.go:89] found id: ""
	I0103 20:17:30.705976   61676 logs.go:284] 1 containers: [8049f81441fd2e7bb236ca49b119c49cefb7a5c69b6880f606b3cc2789145523]
	I0103 20:17:30.706031   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:30.710228   61676 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0103 20:17:30.710308   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0103 20:17:30.749052   61676 cri.go:89] found id: ""
	I0103 20:17:30.749077   61676 logs.go:284] 0 containers: []
	W0103 20:17:30.749088   61676 logs.go:286] No container was found matching "kindnet"
	I0103 20:17:30.749096   61676 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0103 20:17:30.749157   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0103 20:17:30.786239   61676 cri.go:89] found id: "0ed16e65a5dbabdbadbf6d56a4fd7bafe80d50867ed1829e86bfa8f28e8fb719"
	I0103 20:17:30.786267   61676 cri.go:89] found id: "3c57ed4c58edf02d0e73cd6dfc6799993556d74916ffe4c07b91d85346f291e2"
	I0103 20:17:30.786273   61676 cri.go:89] found id: ""
	I0103 20:17:30.786280   61676 logs.go:284] 2 containers: [0ed16e65a5dbabdbadbf6d56a4fd7bafe80d50867ed1829e86bfa8f28e8fb719 3c57ed4c58edf02d0e73cd6dfc6799993556d74916ffe4c07b91d85346f291e2]
	I0103 20:17:30.786341   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:30.790680   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:30.794294   61676 logs.go:123] Gathering logs for coredns [e982a226a7c2e7dda117aabd99279729198043d8b9c79fdc0904c8384f9a031b] ...
	I0103 20:17:30.794320   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e982a226a7c2e7dda117aabd99279729198043d8b9c79fdc0904c8384f9a031b"
	I0103 20:17:30.835916   61676 logs.go:123] Gathering logs for storage-provisioner [0ed16e65a5dbabdbadbf6d56a4fd7bafe80d50867ed1829e86bfa8f28e8fb719] ...
	I0103 20:17:30.835952   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ed16e65a5dbabdbadbf6d56a4fd7bafe80d50867ed1829e86bfa8f28e8fb719"
	I0103 20:17:30.876225   61676 logs.go:123] Gathering logs for storage-provisioner [3c57ed4c58edf02d0e73cd6dfc6799993556d74916ffe4c07b91d85346f291e2] ...
	I0103 20:17:30.876255   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c57ed4c58edf02d0e73cd6dfc6799993556d74916ffe4c07b91d85346f291e2"
	I0103 20:17:30.917657   61676 logs.go:123] Gathering logs for dmesg ...
	I0103 20:17:30.917684   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 20:17:30.930805   61676 logs.go:123] Gathering logs for describe nodes ...
	I0103 20:17:30.930831   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0103 20:17:31.060049   61676 logs.go:123] Gathering logs for kube-apiserver [b43e6c342d85d7f5a7233dc5c79962eece071e18b4bbcd6583c66d34a8bba0d6] ...
	I0103 20:17:31.060086   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b43e6c342d85d7f5a7233dc5c79962eece071e18b4bbcd6583c66d34a8bba0d6"
	I0103 20:17:31.119725   61676 logs.go:123] Gathering logs for kube-scheduler [91cc8e54c59c4642dc1a808adf81ed78af36cc6dae57d37fd913d2d01d232f3d] ...
	I0103 20:17:31.119754   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 91cc8e54c59c4642dc1a808adf81ed78af36cc6dae57d37fd913d2d01d232f3d"
	I0103 20:17:31.164226   61676 logs.go:123] Gathering logs for kube-proxy [a076ccb3aaf528efca2f2b4a38d06dc3b8376212edc210731b7c268a24909ccf] ...
	I0103 20:17:31.164261   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a076ccb3aaf528efca2f2b4a38d06dc3b8376212edc210731b7c268a24909ccf"
	I0103 20:17:31.204790   61676 logs.go:123] Gathering logs for kube-controller-manager [8049f81441fd2e7bb236ca49b119c49cefb7a5c69b6880f606b3cc2789145523] ...
	I0103 20:17:31.204816   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8049f81441fd2e7bb236ca49b119c49cefb7a5c69b6880f606b3cc2789145523"
	I0103 20:17:31.264949   61676 logs.go:123] Gathering logs for CRI-O ...
	I0103 20:17:31.264984   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0103 20:17:31.658178   61676 logs.go:123] Gathering logs for kubelet ...
	I0103 20:17:31.658217   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 20:17:31.712090   61676 logs.go:123] Gathering logs for etcd [d5b2310ec90e1b2cf6666d6b054b6e3233664a226da6f2c5dbb9f1aad6bf5e40] ...
	I0103 20:17:31.712125   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5b2310ec90e1b2cf6666d6b054b6e3233664a226da6f2c5dbb9f1aad6bf5e40"
	I0103 20:17:31.757333   61676 logs.go:123] Gathering logs for container status ...
	I0103 20:17:31.757364   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 20:17:30.253170   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:32.753056   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:30.896380   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:32.896512   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:35.399775   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:34.304692   61676 api_server.go:253] Checking apiserver healthz at https://192.168.50.197:8443/healthz ...
	I0103 20:17:34.311338   61676 api_server.go:279] https://192.168.50.197:8443/healthz returned 200:
	ok
	I0103 20:17:34.312603   61676 api_server.go:141] control plane version: v1.28.4
	I0103 20:17:34.312624   61676 api_server.go:131] duration metric: took 3.878815002s to wait for apiserver health ...
	I0103 20:17:34.312632   61676 system_pods.go:43] waiting for kube-system pods to appear ...
	I0103 20:17:34.312651   61676 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0103 20:17:34.312705   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0103 20:17:34.347683   61676 cri.go:89] found id: "b43e6c342d85d7f5a7233dc5c79962eece071e18b4bbcd6583c66d34a8bba0d6"
	I0103 20:17:34.347701   61676 cri.go:89] found id: ""
	I0103 20:17:34.347711   61676 logs.go:284] 1 containers: [b43e6c342d85d7f5a7233dc5c79962eece071e18b4bbcd6583c66d34a8bba0d6]
	I0103 20:17:34.347769   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:34.351695   61676 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0103 20:17:34.351773   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0103 20:17:34.386166   61676 cri.go:89] found id: "d5b2310ec90e1b2cf6666d6b054b6e3233664a226da6f2c5dbb9f1aad6bf5e40"
	I0103 20:17:34.386188   61676 cri.go:89] found id: ""
	I0103 20:17:34.386197   61676 logs.go:284] 1 containers: [d5b2310ec90e1b2cf6666d6b054b6e3233664a226da6f2c5dbb9f1aad6bf5e40]
	I0103 20:17:34.386259   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:34.390352   61676 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0103 20:17:34.390427   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0103 20:17:34.427772   61676 cri.go:89] found id: "e982a226a7c2e7dda117aabd99279729198043d8b9c79fdc0904c8384f9a031b"
	I0103 20:17:34.427801   61676 cri.go:89] found id: ""
	I0103 20:17:34.427811   61676 logs.go:284] 1 containers: [e982a226a7c2e7dda117aabd99279729198043d8b9c79fdc0904c8384f9a031b]
	I0103 20:17:34.427872   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:34.432258   61676 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0103 20:17:34.432324   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0103 20:17:34.471746   61676 cri.go:89] found id: "91cc8e54c59c4642dc1a808adf81ed78af36cc6dae57d37fd913d2d01d232f3d"
	I0103 20:17:34.471789   61676 cri.go:89] found id: ""
	I0103 20:17:34.471812   61676 logs.go:284] 1 containers: [91cc8e54c59c4642dc1a808adf81ed78af36cc6dae57d37fd913d2d01d232f3d]
	I0103 20:17:34.471878   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:34.476656   61676 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0103 20:17:34.476729   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0103 20:17:34.514594   61676 cri.go:89] found id: "a076ccb3aaf528efca2f2b4a38d06dc3b8376212edc210731b7c268a24909ccf"
	I0103 20:17:34.514626   61676 cri.go:89] found id: ""
	I0103 20:17:34.514685   61676 logs.go:284] 1 containers: [a076ccb3aaf528efca2f2b4a38d06dc3b8376212edc210731b7c268a24909ccf]
	I0103 20:17:34.514779   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:34.518789   61676 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0103 20:17:34.518849   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0103 20:17:34.555672   61676 cri.go:89] found id: "8049f81441fd2e7bb236ca49b119c49cefb7a5c69b6880f606b3cc2789145523"
	I0103 20:17:34.555698   61676 cri.go:89] found id: ""
	I0103 20:17:34.555707   61676 logs.go:284] 1 containers: [8049f81441fd2e7bb236ca49b119c49cefb7a5c69b6880f606b3cc2789145523]
	I0103 20:17:34.555771   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:34.560278   61676 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0103 20:17:34.560338   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0103 20:17:34.598718   61676 cri.go:89] found id: ""
	I0103 20:17:34.598742   61676 logs.go:284] 0 containers: []
	W0103 20:17:34.598753   61676 logs.go:286] No container was found matching "kindnet"
	I0103 20:17:34.598759   61676 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0103 20:17:34.598810   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0103 20:17:34.635723   61676 cri.go:89] found id: "0ed16e65a5dbabdbadbf6d56a4fd7bafe80d50867ed1829e86bfa8f28e8fb719"
	I0103 20:17:34.635751   61676 cri.go:89] found id: "3c57ed4c58edf02d0e73cd6dfc6799993556d74916ffe4c07b91d85346f291e2"
	I0103 20:17:34.635758   61676 cri.go:89] found id: ""
	I0103 20:17:34.635767   61676 logs.go:284] 2 containers: [0ed16e65a5dbabdbadbf6d56a4fd7bafe80d50867ed1829e86bfa8f28e8fb719 3c57ed4c58edf02d0e73cd6dfc6799993556d74916ffe4c07b91d85346f291e2]
	I0103 20:17:34.635814   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:34.640466   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:34.644461   61676 logs.go:123] Gathering logs for dmesg ...
	I0103 20:17:34.644490   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 20:17:34.659819   61676 logs.go:123] Gathering logs for coredns [e982a226a7c2e7dda117aabd99279729198043d8b9c79fdc0904c8384f9a031b] ...
	I0103 20:17:34.659856   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e982a226a7c2e7dda117aabd99279729198043d8b9c79fdc0904c8384f9a031b"
	I0103 20:17:34.697807   61676 logs.go:123] Gathering logs for kube-scheduler [91cc8e54c59c4642dc1a808adf81ed78af36cc6dae57d37fd913d2d01d232f3d] ...
	I0103 20:17:34.697840   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 91cc8e54c59c4642dc1a808adf81ed78af36cc6dae57d37fd913d2d01d232f3d"
	I0103 20:17:34.745366   61676 logs.go:123] Gathering logs for kube-controller-manager [8049f81441fd2e7bb236ca49b119c49cefb7a5c69b6880f606b3cc2789145523] ...
	I0103 20:17:34.745397   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8049f81441fd2e7bb236ca49b119c49cefb7a5c69b6880f606b3cc2789145523"
	I0103 20:17:34.804885   61676 logs.go:123] Gathering logs for kube-apiserver [b43e6c342d85d7f5a7233dc5c79962eece071e18b4bbcd6583c66d34a8bba0d6] ...
	I0103 20:17:34.804919   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b43e6c342d85d7f5a7233dc5c79962eece071e18b4bbcd6583c66d34a8bba0d6"
	I0103 20:17:34.848753   61676 logs.go:123] Gathering logs for etcd [d5b2310ec90e1b2cf6666d6b054b6e3233664a226da6f2c5dbb9f1aad6bf5e40] ...
	I0103 20:17:34.848784   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5b2310ec90e1b2cf6666d6b054b6e3233664a226da6f2c5dbb9f1aad6bf5e40"
	I0103 20:17:34.891492   61676 logs.go:123] Gathering logs for CRI-O ...
	I0103 20:17:34.891524   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0103 20:17:35.234093   61676 logs.go:123] Gathering logs for kube-proxy [a076ccb3aaf528efca2f2b4a38d06dc3b8376212edc210731b7c268a24909ccf] ...
	I0103 20:17:35.234133   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a076ccb3aaf528efca2f2b4a38d06dc3b8376212edc210731b7c268a24909ccf"
	I0103 20:17:35.281396   61676 logs.go:123] Gathering logs for storage-provisioner [0ed16e65a5dbabdbadbf6d56a4fd7bafe80d50867ed1829e86bfa8f28e8fb719] ...
	I0103 20:17:35.281425   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ed16e65a5dbabdbadbf6d56a4fd7bafe80d50867ed1829e86bfa8f28e8fb719"
	I0103 20:17:35.317595   61676 logs.go:123] Gathering logs for storage-provisioner [3c57ed4c58edf02d0e73cd6dfc6799993556d74916ffe4c07b91d85346f291e2] ...
	I0103 20:17:35.317622   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c57ed4c58edf02d0e73cd6dfc6799993556d74916ffe4c07b91d85346f291e2"
	I0103 20:17:35.357552   61676 logs.go:123] Gathering logs for container status ...
	I0103 20:17:35.357600   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 20:17:35.405369   61676 logs.go:123] Gathering logs for kubelet ...
	I0103 20:17:35.405394   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 20:17:35.459496   61676 logs.go:123] Gathering logs for describe nodes ...
	I0103 20:17:35.459535   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0103 20:17:38.101844   61676 system_pods.go:59] 8 kube-system pods found
	I0103 20:17:38.101870   61676 system_pods.go:61] "coredns-5dd5756b68-sx6gg" [6a4ea161-1a32-4c3b-9a0d-b4c596492d8b] Running
	I0103 20:17:38.101875   61676 system_pods.go:61] "etcd-embed-certs-451331" [01d6441d-5e39-405a-81df-c2ed1e28cf0b] Running
	I0103 20:17:38.101879   61676 system_pods.go:61] "kube-apiserver-embed-certs-451331" [ed38f120-6a1a-48e7-9346-f792f2e13cfc] Running
	I0103 20:17:38.101886   61676 system_pods.go:61] "kube-controller-manager-embed-certs-451331" [4ca17ea6-a7e6-425b-98ba-7f917ceb91a0] Running
	I0103 20:17:38.101892   61676 system_pods.go:61] "kube-proxy-fsnb9" [d1f00cf1-e9c4-442b-a6b3-b633252b840c] Running
	I0103 20:17:38.101898   61676 system_pods.go:61] "kube-scheduler-embed-certs-451331" [00ec8091-7ed7-40b0-8b63-1c548fa8632d] Running
	I0103 20:17:38.101907   61676 system_pods.go:61] "metrics-server-57f55c9bc5-sm8rb" [12b9f83d-abf8-431c-a271-b8489d32f0de] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0103 20:17:38.101919   61676 system_pods.go:61] "storage-provisioner" [cbce49e7-cef5-40a1-a017-906fcc77ef66] Running
	I0103 20:17:38.101931   61676 system_pods.go:74] duration metric: took 3.789293156s to wait for pod list to return data ...
	I0103 20:17:38.101940   61676 default_sa.go:34] waiting for default service account to be created ...
	I0103 20:17:38.104360   61676 default_sa.go:45] found service account: "default"
	I0103 20:17:38.104386   61676 default_sa.go:55] duration metric: took 2.437157ms for default service account to be created ...
	I0103 20:17:38.104395   61676 system_pods.go:116] waiting for k8s-apps to be running ...
	I0103 20:17:38.110198   61676 system_pods.go:86] 8 kube-system pods found
	I0103 20:17:38.110226   61676 system_pods.go:89] "coredns-5dd5756b68-sx6gg" [6a4ea161-1a32-4c3b-9a0d-b4c596492d8b] Running
	I0103 20:17:38.110233   61676 system_pods.go:89] "etcd-embed-certs-451331" [01d6441d-5e39-405a-81df-c2ed1e28cf0b] Running
	I0103 20:17:38.110241   61676 system_pods.go:89] "kube-apiserver-embed-certs-451331" [ed38f120-6a1a-48e7-9346-f792f2e13cfc] Running
	I0103 20:17:38.110247   61676 system_pods.go:89] "kube-controller-manager-embed-certs-451331" [4ca17ea6-a7e6-425b-98ba-7f917ceb91a0] Running
	I0103 20:17:38.110254   61676 system_pods.go:89] "kube-proxy-fsnb9" [d1f00cf1-e9c4-442b-a6b3-b633252b840c] Running
	I0103 20:17:38.110262   61676 system_pods.go:89] "kube-scheduler-embed-certs-451331" [00ec8091-7ed7-40b0-8b63-1c548fa8632d] Running
	I0103 20:17:38.110272   61676 system_pods.go:89] "metrics-server-57f55c9bc5-sm8rb" [12b9f83d-abf8-431c-a271-b8489d32f0de] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0103 20:17:38.110287   61676 system_pods.go:89] "storage-provisioner" [cbce49e7-cef5-40a1-a017-906fcc77ef66] Running
	I0103 20:17:38.110300   61676 system_pods.go:126] duration metric: took 5.897003ms to wait for k8s-apps to be running ...
	I0103 20:17:38.110310   61676 system_svc.go:44] waiting for kubelet service to be running ....
	I0103 20:17:38.110359   61676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 20:17:38.129025   61676 system_svc.go:56] duration metric: took 18.705736ms WaitForService to wait for kubelet.
	I0103 20:17:38.129071   61676 kubeadm.go:581] duration metric: took 4m20.332460734s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0103 20:17:38.129104   61676 node_conditions.go:102] verifying NodePressure condition ...
	I0103 20:17:38.132674   61676 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0103 20:17:38.132703   61676 node_conditions.go:123] node cpu capacity is 2
	I0103 20:17:38.132718   61676 node_conditions.go:105] duration metric: took 3.608193ms to run NodePressure ...
	I0103 20:17:38.132803   61676 start.go:228] waiting for startup goroutines ...
	I0103 20:17:38.132830   61676 start.go:233] waiting for cluster config update ...
	I0103 20:17:38.132846   61676 start.go:242] writing updated cluster config ...
	I0103 20:17:38.133198   61676 ssh_runner.go:195] Run: rm -f paused
	I0103 20:17:38.185728   61676 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0103 20:17:38.187862   61676 out.go:177] * Done! kubectl is now configured to use "embed-certs-451331" cluster and "default" namespace by default
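Editor's note: the api_server.go lines above show the cluster only being declared ready after https://192.168.50.197:8443/healthz answers 200 with body "ok". Below is a minimal, hypothetical Go sketch of that style of polling loop; it is not minikube's actual implementation, and the URL, the 2-minute deadline, and the TLS-skip shortcut are illustration-only assumptions.

	// healthzpoll.go - illustrative sketch only, not minikube code.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns 200 "ok" or the deadline passes,
	// mirroring the "returned 200: ok" lines in the log above.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			// The test cluster uses a self-signed CA; skipping verification
			// here is an illustration-only shortcut.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					return nil
				}
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("apiserver %s not healthy within %s", url, timeout)
	}

	func main() {
		// Address copied from the log purely for illustration; substitute your own.
		if err := waitForHealthz("https://192.168.50.197:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
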
	I0103 20:17:34.753175   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:37.254091   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:37.896317   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:40.396299   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:39.752580   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:41.755418   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:44.253073   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:42.897389   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:45.396646   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:46.253958   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:48.753284   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:47.398164   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:49.895246   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:50.754133   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:53.253046   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:51.895627   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:53.897877   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:55.254029   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:57.752707   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:56.398655   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:58.897483   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:59.753306   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:18:01.753500   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:18:02.255901   62015 pod_ready.go:81] duration metric: took 4m0.010124972s waiting for pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace to be "Ready" ...
	E0103 20:18:02.255929   62015 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0103 20:18:02.255939   62015 pod_ready.go:38] duration metric: took 4m4.070078749s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:18:02.255957   62015 api_server.go:52] waiting for apiserver process to appear ...
	I0103 20:18:02.255989   62015 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0103 20:18:02.256064   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0103 20:18:02.312578   62015 cri.go:89] found id: "fb19a705262540d0025443f8a9db8a3139cffa87d8fd8f412b12547818a91a8b"
	I0103 20:18:02.312606   62015 cri.go:89] found id: ""
	I0103 20:18:02.312616   62015 logs.go:284] 1 containers: [fb19a705262540d0025443f8a9db8a3139cffa87d8fd8f412b12547818a91a8b]
	I0103 20:18:02.312679   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:02.317969   62015 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0103 20:18:02.318064   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0103 20:18:02.361423   62015 cri.go:89] found id: "f7d2f606bd4457e00acad3e38e40d2a0eeba1830b665bd0b453061447cb63748"
	I0103 20:18:02.361451   62015 cri.go:89] found id: ""
	I0103 20:18:02.361464   62015 logs.go:284] 1 containers: [f7d2f606bd4457e00acad3e38e40d2a0eeba1830b665bd0b453061447cb63748]
	I0103 20:18:02.361527   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:02.365691   62015 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0103 20:18:02.365772   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0103 20:18:02.415087   62015 cri.go:89] found id: "b13d0a23b2b29e43671041dd6b0519d5cdc0ff78c47236c54585056c177f7f4a"
	I0103 20:18:02.415118   62015 cri.go:89] found id: ""
	I0103 20:18:02.415128   62015 logs.go:284] 1 containers: [b13d0a23b2b29e43671041dd6b0519d5cdc0ff78c47236c54585056c177f7f4a]
	I0103 20:18:02.415188   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:02.419409   62015 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0103 20:18:02.419493   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0103 20:18:02.459715   62015 cri.go:89] found id: "03433af76d74a120ca843ab0d9d45d2a0808d9048bf656bff68d7b7371082893"
	I0103 20:18:02.459744   62015 cri.go:89] found id: ""
	I0103 20:18:02.459754   62015 logs.go:284] 1 containers: [03433af76d74a120ca843ab0d9d45d2a0808d9048bf656bff68d7b7371082893]
	I0103 20:18:02.459816   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:02.464105   62015 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0103 20:18:02.464186   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0103 20:18:02.515523   62015 cri.go:89] found id: "250be399ab1a0c358e410293a6952aadd55d48a462f2edb9c6d0b560eb323cd8"
	I0103 20:18:02.515547   62015 cri.go:89] found id: ""
	I0103 20:18:02.515556   62015 logs.go:284] 1 containers: [250be399ab1a0c358e410293a6952aadd55d48a462f2edb9c6d0b560eb323cd8]
	I0103 20:18:02.515619   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:02.519586   62015 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0103 20:18:02.519646   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0103 20:18:02.561187   62015 cri.go:89] found id: "67f470e7e603d5253da506a4ad4eacce339e32b382bb6ad5e981e6f0c40abb85"
	I0103 20:18:02.561210   62015 cri.go:89] found id: ""
	I0103 20:18:02.561219   62015 logs.go:284] 1 containers: [67f470e7e603d5253da506a4ad4eacce339e32b382bb6ad5e981e6f0c40abb85]
	I0103 20:18:02.561288   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:02.566206   62015 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0103 20:18:02.566289   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0103 20:18:02.610993   62015 cri.go:89] found id: ""
	I0103 20:18:02.611019   62015 logs.go:284] 0 containers: []
	W0103 20:18:02.611029   62015 logs.go:286] No container was found matching "kindnet"
	I0103 20:18:02.611036   62015 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0103 20:18:02.611111   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0103 20:18:02.651736   62015 cri.go:89] found id: "08f95eed823c13190efb38f0b605b92442b8229f1fd1e3b9ab3f2d7fdf18c052"
	I0103 20:18:02.651764   62015 cri.go:89] found id: "367b9549fe5f7c717d71d33d0fbf5559d0b671b4eec29201566aa8354781474d"
	I0103 20:18:02.651771   62015 cri.go:89] found id: ""
	I0103 20:18:02.651779   62015 logs.go:284] 2 containers: [08f95eed823c13190efb38f0b605b92442b8229f1fd1e3b9ab3f2d7fdf18c052 367b9549fe5f7c717d71d33d0fbf5559d0b671b4eec29201566aa8354781474d]
	I0103 20:18:02.651839   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:02.656284   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:02.660614   62015 logs.go:123] Gathering logs for etcd [f7d2f606bd4457e00acad3e38e40d2a0eeba1830b665bd0b453061447cb63748] ...
	I0103 20:18:02.660636   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7d2f606bd4457e00acad3e38e40d2a0eeba1830b665bd0b453061447cb63748"
	I0103 20:18:02.707759   62015 logs.go:123] Gathering logs for kube-controller-manager [67f470e7e603d5253da506a4ad4eacce339e32b382bb6ad5e981e6f0c40abb85] ...
	I0103 20:18:02.707804   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 67f470e7e603d5253da506a4ad4eacce339e32b382bb6ad5e981e6f0c40abb85"
	I0103 20:18:02.766498   62015 logs.go:123] Gathering logs for CRI-O ...
	I0103 20:18:02.766551   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0103 20:18:03.227838   62015 logs.go:123] Gathering logs for kube-proxy [250be399ab1a0c358e410293a6952aadd55d48a462f2edb9c6d0b560eb323cd8] ...
	I0103 20:18:03.227884   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 250be399ab1a0c358e410293a6952aadd55d48a462f2edb9c6d0b560eb323cd8"
	I0103 20:18:03.269131   62015 logs.go:123] Gathering logs for storage-provisioner [08f95eed823c13190efb38f0b605b92442b8229f1fd1e3b9ab3f2d7fdf18c052] ...
	I0103 20:18:03.269174   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08f95eed823c13190efb38f0b605b92442b8229f1fd1e3b9ab3f2d7fdf18c052"
	I0103 20:18:03.307383   62015 logs.go:123] Gathering logs for kubelet ...
	I0103 20:18:03.307410   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 20:18:03.362005   62015 logs.go:123] Gathering logs for kube-apiserver [fb19a705262540d0025443f8a9db8a3139cffa87d8fd8f412b12547818a91a8b] ...
	I0103 20:18:03.362043   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb19a705262540d0025443f8a9db8a3139cffa87d8fd8f412b12547818a91a8b"
	I0103 20:18:03.412300   62015 logs.go:123] Gathering logs for coredns [b13d0a23b2b29e43671041dd6b0519d5cdc0ff78c47236c54585056c177f7f4a] ...
	I0103 20:18:03.412333   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b13d0a23b2b29e43671041dd6b0519d5cdc0ff78c47236c54585056c177f7f4a"
	I0103 20:18:03.448896   62015 logs.go:123] Gathering logs for describe nodes ...
	I0103 20:18:03.448922   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0103 20:18:03.587950   62015 logs.go:123] Gathering logs for kube-scheduler [03433af76d74a120ca843ab0d9d45d2a0808d9048bf656bff68d7b7371082893] ...
	I0103 20:18:03.587982   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03433af76d74a120ca843ab0d9d45d2a0808d9048bf656bff68d7b7371082893"
	I0103 20:18:03.629411   62015 logs.go:123] Gathering logs for container status ...
	I0103 20:18:03.629438   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 20:18:03.672468   62015 logs.go:123] Gathering logs for dmesg ...
	I0103 20:18:03.672499   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 20:18:03.685645   62015 logs.go:123] Gathering logs for storage-provisioner [367b9549fe5f7c717d71d33d0fbf5559d0b671b4eec29201566aa8354781474d] ...
	I0103 20:18:03.685682   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 367b9549fe5f7c717d71d33d0fbf5559d0b671b4eec29201566aa8354781474d"
	I0103 20:18:01.395689   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:18:03.396256   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:18:06.229417   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:18:06.244272   62015 api_server.go:72] duration metric: took 4m15.901019711s to wait for apiserver process to appear ...
	I0103 20:18:06.244306   62015 api_server.go:88] waiting for apiserver healthz status ...
	I0103 20:18:06.244351   62015 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0103 20:18:06.244412   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0103 20:18:06.292204   62015 cri.go:89] found id: "fb19a705262540d0025443f8a9db8a3139cffa87d8fd8f412b12547818a91a8b"
	I0103 20:18:06.292235   62015 cri.go:89] found id: ""
	I0103 20:18:06.292246   62015 logs.go:284] 1 containers: [fb19a705262540d0025443f8a9db8a3139cffa87d8fd8f412b12547818a91a8b]
	I0103 20:18:06.292309   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:06.296724   62015 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0103 20:18:06.296791   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0103 20:18:06.333984   62015 cri.go:89] found id: "f7d2f606bd4457e00acad3e38e40d2a0eeba1830b665bd0b453061447cb63748"
	I0103 20:18:06.334012   62015 cri.go:89] found id: ""
	I0103 20:18:06.334023   62015 logs.go:284] 1 containers: [f7d2f606bd4457e00acad3e38e40d2a0eeba1830b665bd0b453061447cb63748]
	I0103 20:18:06.334079   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:06.338045   62015 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0103 20:18:06.338123   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0103 20:18:06.374586   62015 cri.go:89] found id: "b13d0a23b2b29e43671041dd6b0519d5cdc0ff78c47236c54585056c177f7f4a"
	I0103 20:18:06.374610   62015 cri.go:89] found id: ""
	I0103 20:18:06.374617   62015 logs.go:284] 1 containers: [b13d0a23b2b29e43671041dd6b0519d5cdc0ff78c47236c54585056c177f7f4a]
	I0103 20:18:06.374669   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:06.378720   62015 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0103 20:18:06.378792   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0103 20:18:06.416220   62015 cri.go:89] found id: "03433af76d74a120ca843ab0d9d45d2a0808d9048bf656bff68d7b7371082893"
	I0103 20:18:06.416240   62015 cri.go:89] found id: ""
	I0103 20:18:06.416247   62015 logs.go:284] 1 containers: [03433af76d74a120ca843ab0d9d45d2a0808d9048bf656bff68d7b7371082893]
	I0103 20:18:06.416300   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:06.420190   62015 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0103 20:18:06.420247   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0103 20:18:06.458725   62015 cri.go:89] found id: "250be399ab1a0c358e410293a6952aadd55d48a462f2edb9c6d0b560eb323cd8"
	I0103 20:18:06.458745   62015 cri.go:89] found id: ""
	I0103 20:18:06.458754   62015 logs.go:284] 1 containers: [250be399ab1a0c358e410293a6952aadd55d48a462f2edb9c6d0b560eb323cd8]
	I0103 20:18:06.458808   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:06.462703   62015 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0103 20:18:06.462759   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0103 20:18:06.504559   62015 cri.go:89] found id: "67f470e7e603d5253da506a4ad4eacce339e32b382bb6ad5e981e6f0c40abb85"
	I0103 20:18:06.504587   62015 cri.go:89] found id: ""
	I0103 20:18:06.504596   62015 logs.go:284] 1 containers: [67f470e7e603d5253da506a4ad4eacce339e32b382bb6ad5e981e6f0c40abb85]
	I0103 20:18:06.504659   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:06.508602   62015 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0103 20:18:06.508662   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0103 20:18:06.559810   62015 cri.go:89] found id: ""
	I0103 20:18:06.559833   62015 logs.go:284] 0 containers: []
	W0103 20:18:06.559840   62015 logs.go:286] No container was found matching "kindnet"
	I0103 20:18:06.559846   62015 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0103 20:18:06.559905   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0103 20:18:06.598672   62015 cri.go:89] found id: "08f95eed823c13190efb38f0b605b92442b8229f1fd1e3b9ab3f2d7fdf18c052"
	I0103 20:18:06.598697   62015 cri.go:89] found id: "367b9549fe5f7c717d71d33d0fbf5559d0b671b4eec29201566aa8354781474d"
	I0103 20:18:06.598704   62015 cri.go:89] found id: ""
	I0103 20:18:06.598712   62015 logs.go:284] 2 containers: [08f95eed823c13190efb38f0b605b92442b8229f1fd1e3b9ab3f2d7fdf18c052 367b9549fe5f7c717d71d33d0fbf5559d0b671b4eec29201566aa8354781474d]
	I0103 20:18:06.598766   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:06.602828   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:06.607033   62015 logs.go:123] Gathering logs for describe nodes ...
	I0103 20:18:06.607050   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0103 20:18:06.758606   62015 logs.go:123] Gathering logs for storage-provisioner [367b9549fe5f7c717d71d33d0fbf5559d0b671b4eec29201566aa8354781474d] ...
	I0103 20:18:06.758634   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 367b9549fe5f7c717d71d33d0fbf5559d0b671b4eec29201566aa8354781474d"
	I0103 20:18:06.797521   62015 logs.go:123] Gathering logs for kubelet ...
	I0103 20:18:06.797552   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 20:18:06.856126   62015 logs.go:123] Gathering logs for kube-apiserver [fb19a705262540d0025443f8a9db8a3139cffa87d8fd8f412b12547818a91a8b] ...
	I0103 20:18:06.856159   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb19a705262540d0025443f8a9db8a3139cffa87d8fd8f412b12547818a91a8b"
	I0103 20:18:06.902629   62015 logs.go:123] Gathering logs for etcd [f7d2f606bd4457e00acad3e38e40d2a0eeba1830b665bd0b453061447cb63748] ...
	I0103 20:18:06.902656   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7d2f606bd4457e00acad3e38e40d2a0eeba1830b665bd0b453061447cb63748"
	I0103 20:18:06.953115   62015 logs.go:123] Gathering logs for storage-provisioner [08f95eed823c13190efb38f0b605b92442b8229f1fd1e3b9ab3f2d7fdf18c052] ...
	I0103 20:18:06.953154   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08f95eed823c13190efb38f0b605b92442b8229f1fd1e3b9ab3f2d7fdf18c052"
	I0103 20:18:06.993311   62015 logs.go:123] Gathering logs for CRI-O ...
	I0103 20:18:06.993342   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0103 20:18:07.393614   62015 logs.go:123] Gathering logs for dmesg ...
	I0103 20:18:07.393655   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 20:18:07.408367   62015 logs.go:123] Gathering logs for kube-proxy [250be399ab1a0c358e410293a6952aadd55d48a462f2edb9c6d0b560eb323cd8] ...
	I0103 20:18:07.408397   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 250be399ab1a0c358e410293a6952aadd55d48a462f2edb9c6d0b560eb323cd8"
	I0103 20:18:07.446725   62015 logs.go:123] Gathering logs for container status ...
	I0103 20:18:07.446756   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 20:18:07.494564   62015 logs.go:123] Gathering logs for coredns [b13d0a23b2b29e43671041dd6b0519d5cdc0ff78c47236c54585056c177f7f4a] ...
	I0103 20:18:07.494595   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b13d0a23b2b29e43671041dd6b0519d5cdc0ff78c47236c54585056c177f7f4a"
	I0103 20:18:07.529151   62015 logs.go:123] Gathering logs for kube-scheduler [03433af76d74a120ca843ab0d9d45d2a0808d9048bf656bff68d7b7371082893] ...
	I0103 20:18:07.529176   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03433af76d74a120ca843ab0d9d45d2a0808d9048bf656bff68d7b7371082893"
	I0103 20:18:07.577090   62015 logs.go:123] Gathering logs for kube-controller-manager [67f470e7e603d5253da506a4ad4eacce339e32b382bb6ad5e981e6f0c40abb85] ...
	I0103 20:18:07.577118   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 67f470e7e603d5253da506a4ad4eacce339e32b382bb6ad5e981e6f0c40abb85"
	I0103 20:18:05.895682   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:18:08.395751   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:18:10.396488   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:18:10.133806   62015 api_server.go:253] Checking apiserver healthz at https://192.168.61.245:8443/healthz ...
	I0103 20:18:10.138606   62015 api_server.go:279] https://192.168.61.245:8443/healthz returned 200:
	ok
	I0103 20:18:10.139965   62015 api_server.go:141] control plane version: v1.29.0-rc.2
	I0103 20:18:10.139986   62015 api_server.go:131] duration metric: took 3.895673488s to wait for apiserver health ...
	I0103 20:18:10.140004   62015 system_pods.go:43] waiting for kube-system pods to appear ...
	I0103 20:18:10.140032   62015 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0103 20:18:10.140078   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0103 20:18:10.177309   62015 cri.go:89] found id: "fb19a705262540d0025443f8a9db8a3139cffa87d8fd8f412b12547818a91a8b"
	I0103 20:18:10.177336   62015 cri.go:89] found id: ""
	I0103 20:18:10.177347   62015 logs.go:284] 1 containers: [fb19a705262540d0025443f8a9db8a3139cffa87d8fd8f412b12547818a91a8b]
	I0103 20:18:10.177398   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:10.181215   62015 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0103 20:18:10.181287   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0103 20:18:10.217151   62015 cri.go:89] found id: "f7d2f606bd4457e00acad3e38e40d2a0eeba1830b665bd0b453061447cb63748"
	I0103 20:18:10.217174   62015 cri.go:89] found id: ""
	I0103 20:18:10.217183   62015 logs.go:284] 1 containers: [f7d2f606bd4457e00acad3e38e40d2a0eeba1830b665bd0b453061447cb63748]
	I0103 20:18:10.217242   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:10.221363   62015 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0103 20:18:10.221447   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0103 20:18:10.271359   62015 cri.go:89] found id: "b13d0a23b2b29e43671041dd6b0519d5cdc0ff78c47236c54585056c177f7f4a"
	I0103 20:18:10.271387   62015 cri.go:89] found id: ""
	I0103 20:18:10.271397   62015 logs.go:284] 1 containers: [b13d0a23b2b29e43671041dd6b0519d5cdc0ff78c47236c54585056c177f7f4a]
	I0103 20:18:10.271460   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:10.277366   62015 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0103 20:18:10.277439   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0103 20:18:10.325567   62015 cri.go:89] found id: "03433af76d74a120ca843ab0d9d45d2a0808d9048bf656bff68d7b7371082893"
	I0103 20:18:10.325594   62015 cri.go:89] found id: ""
	I0103 20:18:10.325604   62015 logs.go:284] 1 containers: [03433af76d74a120ca843ab0d9d45d2a0808d9048bf656bff68d7b7371082893]
	I0103 20:18:10.325662   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:10.331222   62015 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0103 20:18:10.331292   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0103 20:18:10.370488   62015 cri.go:89] found id: "250be399ab1a0c358e410293a6952aadd55d48a462f2edb9c6d0b560eb323cd8"
	I0103 20:18:10.370516   62015 cri.go:89] found id: ""
	I0103 20:18:10.370539   62015 logs.go:284] 1 containers: [250be399ab1a0c358e410293a6952aadd55d48a462f2edb9c6d0b560eb323cd8]
	I0103 20:18:10.370598   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:10.375213   62015 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0103 20:18:10.375272   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0103 20:18:10.417606   62015 cri.go:89] found id: "67f470e7e603d5253da506a4ad4eacce339e32b382bb6ad5e981e6f0c40abb85"
	I0103 20:18:10.417626   62015 cri.go:89] found id: ""
	I0103 20:18:10.417633   62015 logs.go:284] 1 containers: [67f470e7e603d5253da506a4ad4eacce339e32b382bb6ad5e981e6f0c40abb85]
	I0103 20:18:10.417678   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:10.421786   62015 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0103 20:18:10.421848   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0103 20:18:10.459092   62015 cri.go:89] found id: ""
	I0103 20:18:10.459119   62015 logs.go:284] 0 containers: []
	W0103 20:18:10.459129   62015 logs.go:286] No container was found matching "kindnet"
	I0103 20:18:10.459136   62015 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0103 20:18:10.459184   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0103 20:18:10.504845   62015 cri.go:89] found id: "08f95eed823c13190efb38f0b605b92442b8229f1fd1e3b9ab3f2d7fdf18c052"
	I0103 20:18:10.504874   62015 cri.go:89] found id: "367b9549fe5f7c717d71d33d0fbf5559d0b671b4eec29201566aa8354781474d"
	I0103 20:18:10.504879   62015 cri.go:89] found id: ""
	I0103 20:18:10.504886   62015 logs.go:284] 2 containers: [08f95eed823c13190efb38f0b605b92442b8229f1fd1e3b9ab3f2d7fdf18c052 367b9549fe5f7c717d71d33d0fbf5559d0b671b4eec29201566aa8354781474d]
	I0103 20:18:10.504935   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:10.509189   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:10.513671   62015 logs.go:123] Gathering logs for storage-provisioner [367b9549fe5f7c717d71d33d0fbf5559d0b671b4eec29201566aa8354781474d] ...
	I0103 20:18:10.513692   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 367b9549fe5f7c717d71d33d0fbf5559d0b671b4eec29201566aa8354781474d"
	I0103 20:18:10.553961   62015 logs.go:123] Gathering logs for kubelet ...
	I0103 20:18:10.553988   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 20:18:10.606422   62015 logs.go:123] Gathering logs for dmesg ...
	I0103 20:18:10.606463   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 20:18:10.620647   62015 logs.go:123] Gathering logs for kube-controller-manager [67f470e7e603d5253da506a4ad4eacce339e32b382bb6ad5e981e6f0c40abb85] ...
	I0103 20:18:10.620677   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 67f470e7e603d5253da506a4ad4eacce339e32b382bb6ad5e981e6f0c40abb85"
	I0103 20:18:10.678322   62015 logs.go:123] Gathering logs for describe nodes ...
	I0103 20:18:10.678358   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0103 20:18:10.806514   62015 logs.go:123] Gathering logs for container status ...
	I0103 20:18:10.806569   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 20:18:10.862551   62015 logs.go:123] Gathering logs for kube-apiserver [fb19a705262540d0025443f8a9db8a3139cffa87d8fd8f412b12547818a91a8b] ...
	I0103 20:18:10.862589   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb19a705262540d0025443f8a9db8a3139cffa87d8fd8f412b12547818a91a8b"
	I0103 20:18:10.917533   62015 logs.go:123] Gathering logs for etcd [f7d2f606bd4457e00acad3e38e40d2a0eeba1830b665bd0b453061447cb63748] ...
	I0103 20:18:10.917566   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7d2f606bd4457e00acad3e38e40d2a0eeba1830b665bd0b453061447cb63748"
	I0103 20:18:10.988668   62015 logs.go:123] Gathering logs for storage-provisioner [08f95eed823c13190efb38f0b605b92442b8229f1fd1e3b9ab3f2d7fdf18c052] ...
	I0103 20:18:10.988702   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08f95eed823c13190efb38f0b605b92442b8229f1fd1e3b9ab3f2d7fdf18c052"
	I0103 20:18:11.030485   62015 logs.go:123] Gathering logs for CRI-O ...
	I0103 20:18:11.030549   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0103 20:18:11.425651   62015 logs.go:123] Gathering logs for coredns [b13d0a23b2b29e43671041dd6b0519d5cdc0ff78c47236c54585056c177f7f4a] ...
	I0103 20:18:11.425686   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b13d0a23b2b29e43671041dd6b0519d5cdc0ff78c47236c54585056c177f7f4a"
	I0103 20:18:11.481991   62015 logs.go:123] Gathering logs for kube-scheduler [03433af76d74a120ca843ab0d9d45d2a0808d9048bf656bff68d7b7371082893] ...
	I0103 20:18:11.482019   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03433af76d74a120ca843ab0d9d45d2a0808d9048bf656bff68d7b7371082893"
	I0103 20:18:11.526299   62015 logs.go:123] Gathering logs for kube-proxy [250be399ab1a0c358e410293a6952aadd55d48a462f2edb9c6d0b560eb323cd8] ...
	I0103 20:18:11.526335   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 250be399ab1a0c358e410293a6952aadd55d48a462f2edb9c6d0b560eb323cd8"
	I0103 20:18:14.082821   62015 system_pods.go:59] 8 kube-system pods found
	I0103 20:18:14.082847   62015 system_pods.go:61] "coredns-76f75df574-rbx58" [d5e91e6a-e3f9-4dbc-83ff-3069cb67847c] Running
	I0103 20:18:14.082853   62015 system_pods.go:61] "etcd-no-preload-749210" [3cfe84f3-28bd-490f-a7fc-152c1b9784ce] Running
	I0103 20:18:14.082857   62015 system_pods.go:61] "kube-apiserver-no-preload-749210" [1d9d03fa-23c6-4432-b7ec-905fcab8a628] Running
	I0103 20:18:14.082861   62015 system_pods.go:61] "kube-controller-manager-no-preload-749210" [4e4207ef-8844-4547-88a4-b12026250554] Running
	I0103 20:18:14.082865   62015 system_pods.go:61] "kube-proxy-5hwf4" [98fafdf5-9a74-4c9f-96eb-20064c72c4e1] Running
	I0103 20:18:14.082870   62015 system_pods.go:61] "kube-scheduler-no-preload-749210" [21e70024-26b0-4740-ba52-99893ca20809] Running
	I0103 20:18:14.082876   62015 system_pods.go:61] "metrics-server-57f55c9bc5-tqn5m" [8cc1dc91-fafb-4405-8820-a7f99ccbbb0c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0103 20:18:14.082881   62015 system_pods.go:61] "storage-provisioner" [1bf4f1d7-c083-47e7-9976-76bbc72e7bff] Running
	I0103 20:18:14.082887   62015 system_pods.go:74] duration metric: took 3.942878112s to wait for pod list to return data ...
	I0103 20:18:14.082893   62015 default_sa.go:34] waiting for default service account to be created ...
	I0103 20:18:14.087079   62015 default_sa.go:45] found service account: "default"
	I0103 20:18:14.087106   62015 default_sa.go:55] duration metric: took 4.207195ms for default service account to be created ...
	I0103 20:18:14.087115   62015 system_pods.go:116] waiting for k8s-apps to be running ...
	I0103 20:18:14.094161   62015 system_pods.go:86] 8 kube-system pods found
	I0103 20:18:14.094185   62015 system_pods.go:89] "coredns-76f75df574-rbx58" [d5e91e6a-e3f9-4dbc-83ff-3069cb67847c] Running
	I0103 20:18:14.094190   62015 system_pods.go:89] "etcd-no-preload-749210" [3cfe84f3-28bd-490f-a7fc-152c1b9784ce] Running
	I0103 20:18:14.094195   62015 system_pods.go:89] "kube-apiserver-no-preload-749210" [1d9d03fa-23c6-4432-b7ec-905fcab8a628] Running
	I0103 20:18:14.094199   62015 system_pods.go:89] "kube-controller-manager-no-preload-749210" [4e4207ef-8844-4547-88a4-b12026250554] Running
	I0103 20:18:14.094204   62015 system_pods.go:89] "kube-proxy-5hwf4" [98fafdf5-9a74-4c9f-96eb-20064c72c4e1] Running
	I0103 20:18:14.094208   62015 system_pods.go:89] "kube-scheduler-no-preload-749210" [21e70024-26b0-4740-ba52-99893ca20809] Running
	I0103 20:18:14.094219   62015 system_pods.go:89] "metrics-server-57f55c9bc5-tqn5m" [8cc1dc91-fafb-4405-8820-a7f99ccbbb0c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0103 20:18:14.094231   62015 system_pods.go:89] "storage-provisioner" [1bf4f1d7-c083-47e7-9976-76bbc72e7bff] Running
	I0103 20:18:14.094244   62015 system_pods.go:126] duration metric: took 7.123869ms to wait for k8s-apps to be running ...
	I0103 20:18:14.094256   62015 system_svc.go:44] waiting for kubelet service to be running ....
	I0103 20:18:14.094305   62015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 20:18:14.110365   62015 system_svc.go:56] duration metric: took 16.099582ms WaitForService to wait for kubelet.
	I0103 20:18:14.110400   62015 kubeadm.go:581] duration metric: took 4m23.767155373s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0103 20:18:14.110439   62015 node_conditions.go:102] verifying NodePressure condition ...
	I0103 20:18:14.113809   62015 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0103 20:18:14.113833   62015 node_conditions.go:123] node cpu capacity is 2
	I0103 20:18:14.113842   62015 node_conditions.go:105] duration metric: took 3.394645ms to run NodePressure ...
	I0103 20:18:14.113853   62015 start.go:228] waiting for startup goroutines ...
	I0103 20:18:14.113859   62015 start.go:233] waiting for cluster config update ...
	I0103 20:18:14.113868   62015 start.go:242] writing updated cluster config ...
	I0103 20:18:14.114102   62015 ssh_runner.go:195] Run: rm -f paused
	I0103 20:18:14.163090   62015 start.go:600] kubectl: 1.29.0, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0103 20:18:14.165173   62015 out.go:177] * Done! kubectl is now configured to use "no-preload-749210" cluster and "default" namespace by default
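Editor's note: each "Gathering logs for ..." pass above runs the same two commands per control-plane component: `sudo crictl ps -a --quiet --name=<component>` to find container IDs, then `sudo crictl logs --tail 400 <id>`. The Go sketch below reproduces that sequence outside minikube as a rough assumption of how to replay it by hand; the component list and the direct use of sudo (instead of minikube's ssh_runner) are illustration-only choices.

	// gatherlogs.go - illustrative sketch of the crictl log gathering shown above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "storage-provisioner"}
		for _, name := range components {
			// Equivalent of: sudo crictl ps -a --quiet --name=<component>
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			if err != nil {
				fmt.Printf("listing %s containers: %v\n", name, err)
				continue
			}
			for _, id := range strings.Fields(string(out)) {
				// Equivalent of: sudo crictl logs --tail 400 <container-id>
				logs, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
				if err != nil {
					fmt.Printf("logs for %s [%s]: %v\n", name, id, err)
					continue
				}
				fmt.Printf("=== %s [%s] ===\n%s\n", name, id, logs)
			}
		}
	}
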
	I0103 20:18:10.896026   62050 pod_ready.go:81] duration metric: took 4m0.007814497s waiting for pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace to be "Ready" ...
	E0103 20:18:10.896053   62050 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0103 20:18:10.896062   62050 pod_ready.go:38] duration metric: took 4m4.550955933s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:18:10.896076   62050 api_server.go:52] waiting for apiserver process to appear ...
	I0103 20:18:10.896109   62050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0103 20:18:10.896169   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0103 20:18:10.965458   62050 cri.go:89] found id: "ce56b3ad3d4b7d59eb61a980afcc5105c128779756da52fc886d00597b03c6cc"
	I0103 20:18:10.965485   62050 cri.go:89] found id: ""
	I0103 20:18:10.965494   62050 logs.go:284] 1 containers: [ce56b3ad3d4b7d59eb61a980afcc5105c128779756da52fc886d00597b03c6cc]
	I0103 20:18:10.965552   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:10.970818   62050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0103 20:18:10.970890   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0103 20:18:11.014481   62050 cri.go:89] found id: "3bacdb6bf662499f958fae41c1520a7f963ecec63e66bca7076854532316bd9d"
	I0103 20:18:11.014509   62050 cri.go:89] found id: ""
	I0103 20:18:11.014537   62050 logs.go:284] 1 containers: [3bacdb6bf662499f958fae41c1520a7f963ecec63e66bca7076854532316bd9d]
	I0103 20:18:11.014602   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:11.019157   62050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0103 20:18:11.019220   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0103 20:18:11.068101   62050 cri.go:89] found id: "e2370f79911fd2108cab00f1fb2d4c8f16fadefbef4d6ee135f875edd865be06"
	I0103 20:18:11.068129   62050 cri.go:89] found id: ""
	I0103 20:18:11.068138   62050 logs.go:284] 1 containers: [e2370f79911fd2108cab00f1fb2d4c8f16fadefbef4d6ee135f875edd865be06]
	I0103 20:18:11.068202   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:11.075018   62050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0103 20:18:11.075098   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0103 20:18:11.122838   62050 cri.go:89] found id: "abbaa7d1ca8582bf77f6d0951a5eca6711a89471f2239b41df4bd359e01e5a7c"
	I0103 20:18:11.122862   62050 cri.go:89] found id: ""
	I0103 20:18:11.122871   62050 logs.go:284] 1 containers: [abbaa7d1ca8582bf77f6d0951a5eca6711a89471f2239b41df4bd359e01e5a7c]
	I0103 20:18:11.122925   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:11.128488   62050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0103 20:18:11.128563   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0103 20:18:11.178133   62050 cri.go:89] found id: "b1525243614b036cac3291b5a2fbf29c28fdb68757d81418e7e09cfec2c36032"
	I0103 20:18:11.178160   62050 cri.go:89] found id: ""
	I0103 20:18:11.178170   62050 logs.go:284] 1 containers: [b1525243614b036cac3291b5a2fbf29c28fdb68757d81418e7e09cfec2c36032]
	I0103 20:18:11.178233   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:11.182823   62050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0103 20:18:11.182900   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0103 20:18:11.229175   62050 cri.go:89] found id: "2b7de3342fdb5423de182fd009630fc36cee11de3a5f5ff06603fedd80e2d94b"
	I0103 20:18:11.229207   62050 cri.go:89] found id: ""
	I0103 20:18:11.229218   62050 logs.go:284] 1 containers: [2b7de3342fdb5423de182fd009630fc36cee11de3a5f5ff06603fedd80e2d94b]
	I0103 20:18:11.229271   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:11.238617   62050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0103 20:18:11.238686   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0103 20:18:11.289070   62050 cri.go:89] found id: ""
	I0103 20:18:11.289107   62050 logs.go:284] 0 containers: []
	W0103 20:18:11.289115   62050 logs.go:286] No container was found matching "kindnet"
	I0103 20:18:11.289121   62050 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0103 20:18:11.289204   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0103 20:18:11.333342   62050 cri.go:89] found id: "3d1fa8b05cd7cc8dd982013b6295fb6f79f2c8db93aa0c408fc57ea7e898125a"
	I0103 20:18:11.333365   62050 cri.go:89] found id: "365147e198ba529e04c62145182c0e5131c1812d4b8842299d0236cbf556e40f"
	I0103 20:18:11.333370   62050 cri.go:89] found id: ""
	I0103 20:18:11.333376   62050 logs.go:284] 2 containers: [3d1fa8b05cd7cc8dd982013b6295fb6f79f2c8db93aa0c408fc57ea7e898125a 365147e198ba529e04c62145182c0e5131c1812d4b8842299d0236cbf556e40f]
	I0103 20:18:11.333430   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:11.338236   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:11.342643   62050 logs.go:123] Gathering logs for container status ...
	I0103 20:18:11.342668   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 20:18:11.395443   62050 logs.go:123] Gathering logs for describe nodes ...
	I0103 20:18:11.395471   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0103 20:18:11.561224   62050 logs.go:123] Gathering logs for etcd [3bacdb6bf662499f958fae41c1520a7f963ecec63e66bca7076854532316bd9d] ...
	I0103 20:18:11.561258   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bacdb6bf662499f958fae41c1520a7f963ecec63e66bca7076854532316bd9d"
	I0103 20:18:11.619642   62050 logs.go:123] Gathering logs for kube-scheduler [abbaa7d1ca8582bf77f6d0951a5eca6711a89471f2239b41df4bd359e01e5a7c] ...
	I0103 20:18:11.619677   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abbaa7d1ca8582bf77f6d0951a5eca6711a89471f2239b41df4bd359e01e5a7c"
	I0103 20:18:11.656329   62050 logs.go:123] Gathering logs for kube-controller-manager [2b7de3342fdb5423de182fd009630fc36cee11de3a5f5ff06603fedd80e2d94b] ...
	I0103 20:18:11.656370   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b7de3342fdb5423de182fd009630fc36cee11de3a5f5ff06603fedd80e2d94b"
	I0103 20:18:11.710651   62050 logs.go:123] Gathering logs for storage-provisioner [3d1fa8b05cd7cc8dd982013b6295fb6f79f2c8db93aa0c408fc57ea7e898125a] ...
	I0103 20:18:11.710685   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d1fa8b05cd7cc8dd982013b6295fb6f79f2c8db93aa0c408fc57ea7e898125a"
	I0103 20:18:11.756839   62050 logs.go:123] Gathering logs for storage-provisioner [365147e198ba529e04c62145182c0e5131c1812d4b8842299d0236cbf556e40f] ...
	I0103 20:18:11.756866   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 365147e198ba529e04c62145182c0e5131c1812d4b8842299d0236cbf556e40f"
	I0103 20:18:11.791885   62050 logs.go:123] Gathering logs for dmesg ...
	I0103 20:18:11.791920   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 20:18:11.805161   62050 logs.go:123] Gathering logs for CRI-O ...
	I0103 20:18:11.805185   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0103 20:18:12.261916   62050 logs.go:123] Gathering logs for kubelet ...
	I0103 20:18:12.261973   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 20:18:12.316486   62050 logs.go:123] Gathering logs for kube-apiserver [ce56b3ad3d4b7d59eb61a980afcc5105c128779756da52fc886d00597b03c6cc] ...
	I0103 20:18:12.316525   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce56b3ad3d4b7d59eb61a980afcc5105c128779756da52fc886d00597b03c6cc"
	I0103 20:18:12.367998   62050 logs.go:123] Gathering logs for coredns [e2370f79911fd2108cab00f1fb2d4c8f16fadefbef4d6ee135f875edd865be06] ...
	I0103 20:18:12.368032   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2370f79911fd2108cab00f1fb2d4c8f16fadefbef4d6ee135f875edd865be06"
	I0103 20:18:12.404277   62050 logs.go:123] Gathering logs for kube-proxy [b1525243614b036cac3291b5a2fbf29c28fdb68757d81418e7e09cfec2c36032] ...
	I0103 20:18:12.404316   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1525243614b036cac3291b5a2fbf29c28fdb68757d81418e7e09cfec2c36032"
	I0103 20:18:14.943727   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:18:14.959322   62050 api_server.go:72] duration metric: took 4m14.593955756s to wait for apiserver process to appear ...
	I0103 20:18:14.959344   62050 api_server.go:88] waiting for apiserver healthz status ...
	I0103 20:18:14.959384   62050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0103 20:18:14.959443   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0103 20:18:15.001580   62050 cri.go:89] found id: "ce56b3ad3d4b7d59eb61a980afcc5105c128779756da52fc886d00597b03c6cc"
	I0103 20:18:15.001613   62050 cri.go:89] found id: ""
	I0103 20:18:15.001624   62050 logs.go:284] 1 containers: [ce56b3ad3d4b7d59eb61a980afcc5105c128779756da52fc886d00597b03c6cc]
	I0103 20:18:15.001688   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:15.005964   62050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0103 20:18:15.006044   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0103 20:18:15.043364   62050 cri.go:89] found id: "3bacdb6bf662499f958fae41c1520a7f963ecec63e66bca7076854532316bd9d"
	I0103 20:18:15.043393   62050 cri.go:89] found id: ""
	I0103 20:18:15.043403   62050 logs.go:284] 1 containers: [3bacdb6bf662499f958fae41c1520a7f963ecec63e66bca7076854532316bd9d]
	I0103 20:18:15.043461   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:15.047226   62050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0103 20:18:15.047291   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0103 20:18:15.091700   62050 cri.go:89] found id: "e2370f79911fd2108cab00f1fb2d4c8f16fadefbef4d6ee135f875edd865be06"
	I0103 20:18:15.091727   62050 cri.go:89] found id: ""
	I0103 20:18:15.091736   62050 logs.go:284] 1 containers: [e2370f79911fd2108cab00f1fb2d4c8f16fadefbef4d6ee135f875edd865be06]
	I0103 20:18:15.091794   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:15.095953   62050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0103 20:18:15.096038   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0103 20:18:15.132757   62050 cri.go:89] found id: "abbaa7d1ca8582bf77f6d0951a5eca6711a89471f2239b41df4bd359e01e5a7c"
	I0103 20:18:15.132785   62050 cri.go:89] found id: ""
	I0103 20:18:15.132796   62050 logs.go:284] 1 containers: [abbaa7d1ca8582bf77f6d0951a5eca6711a89471f2239b41df4bd359e01e5a7c]
	I0103 20:18:15.132856   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:15.137574   62050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0103 20:18:15.137637   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0103 20:18:15.174799   62050 cri.go:89] found id: "b1525243614b036cac3291b5a2fbf29c28fdb68757d81418e7e09cfec2c36032"
	I0103 20:18:15.174827   62050 cri.go:89] found id: ""
	I0103 20:18:15.174836   62050 logs.go:284] 1 containers: [b1525243614b036cac3291b5a2fbf29c28fdb68757d81418e7e09cfec2c36032]
	I0103 20:18:15.174893   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:15.179052   62050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0103 20:18:15.179119   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0103 20:18:15.218730   62050 cri.go:89] found id: "2b7de3342fdb5423de182fd009630fc36cee11de3a5f5ff06603fedd80e2d94b"
	I0103 20:18:15.218761   62050 cri.go:89] found id: ""
	I0103 20:18:15.218770   62050 logs.go:284] 1 containers: [2b7de3342fdb5423de182fd009630fc36cee11de3a5f5ff06603fedd80e2d94b]
	I0103 20:18:15.218829   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:15.222730   62050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0103 20:18:15.222796   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0103 20:18:15.265020   62050 cri.go:89] found id: ""
	I0103 20:18:15.265046   62050 logs.go:284] 0 containers: []
	W0103 20:18:15.265053   62050 logs.go:286] No container was found matching "kindnet"
	I0103 20:18:15.265059   62050 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0103 20:18:15.265122   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0103 20:18:15.307032   62050 cri.go:89] found id: "3d1fa8b05cd7cc8dd982013b6295fb6f79f2c8db93aa0c408fc57ea7e898125a"
	I0103 20:18:15.307059   62050 cri.go:89] found id: "365147e198ba529e04c62145182c0e5131c1812d4b8842299d0236cbf556e40f"
	I0103 20:18:15.307065   62050 cri.go:89] found id: ""
	I0103 20:18:15.307074   62050 logs.go:284] 2 containers: [3d1fa8b05cd7cc8dd982013b6295fb6f79f2c8db93aa0c408fc57ea7e898125a 365147e198ba529e04c62145182c0e5131c1812d4b8842299d0236cbf556e40f]
	I0103 20:18:15.307132   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:15.311275   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:15.315089   62050 logs.go:123] Gathering logs for container status ...
	I0103 20:18:15.315113   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 20:18:15.361815   62050 logs.go:123] Gathering logs for describe nodes ...
	I0103 20:18:15.361840   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0103 20:18:15.493913   62050 logs.go:123] Gathering logs for kube-apiserver [ce56b3ad3d4b7d59eb61a980afcc5105c128779756da52fc886d00597b03c6cc] ...
	I0103 20:18:15.493947   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce56b3ad3d4b7d59eb61a980afcc5105c128779756da52fc886d00597b03c6cc"
	I0103 20:18:15.553841   62050 logs.go:123] Gathering logs for coredns [e2370f79911fd2108cab00f1fb2d4c8f16fadefbef4d6ee135f875edd865be06] ...
	I0103 20:18:15.553881   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2370f79911fd2108cab00f1fb2d4c8f16fadefbef4d6ee135f875edd865be06"
	I0103 20:18:15.590885   62050 logs.go:123] Gathering logs for storage-provisioner [365147e198ba529e04c62145182c0e5131c1812d4b8842299d0236cbf556e40f] ...
	I0103 20:18:15.590911   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 365147e198ba529e04c62145182c0e5131c1812d4b8842299d0236cbf556e40f"
	I0103 20:18:15.630332   62050 logs.go:123] Gathering logs for CRI-O ...
	I0103 20:18:15.630357   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0103 20:18:16.074625   62050 logs.go:123] Gathering logs for kubelet ...
	I0103 20:18:16.074659   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 20:18:16.133116   62050 logs.go:123] Gathering logs for dmesg ...
	I0103 20:18:16.133161   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 20:18:16.147559   62050 logs.go:123] Gathering logs for kube-controller-manager [2b7de3342fdb5423de182fd009630fc36cee11de3a5f5ff06603fedd80e2d94b] ...
	I0103 20:18:16.147585   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b7de3342fdb5423de182fd009630fc36cee11de3a5f5ff06603fedd80e2d94b"
	I0103 20:18:16.199131   62050 logs.go:123] Gathering logs for storage-provisioner [3d1fa8b05cd7cc8dd982013b6295fb6f79f2c8db93aa0c408fc57ea7e898125a] ...
	I0103 20:18:16.199167   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d1fa8b05cd7cc8dd982013b6295fb6f79f2c8db93aa0c408fc57ea7e898125a"
	I0103 20:18:16.238085   62050 logs.go:123] Gathering logs for etcd [3bacdb6bf662499f958fae41c1520a7f963ecec63e66bca7076854532316bd9d] ...
	I0103 20:18:16.238116   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bacdb6bf662499f958fae41c1520a7f963ecec63e66bca7076854532316bd9d"
	I0103 20:18:16.294992   62050 logs.go:123] Gathering logs for kube-proxy [b1525243614b036cac3291b5a2fbf29c28fdb68757d81418e7e09cfec2c36032] ...
	I0103 20:18:16.295032   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1525243614b036cac3291b5a2fbf29c28fdb68757d81418e7e09cfec2c36032"
	I0103 20:18:16.333862   62050 logs.go:123] Gathering logs for kube-scheduler [abbaa7d1ca8582bf77f6d0951a5eca6711a89471f2239b41df4bd359e01e5a7c] ...
	I0103 20:18:16.333896   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abbaa7d1ca8582bf77f6d0951a5eca6711a89471f2239b41df4bd359e01e5a7c"
	I0103 20:18:18.875707   62050 api_server.go:253] Checking apiserver healthz at https://192.168.39.139:8444/healthz ...
	I0103 20:18:18.882546   62050 api_server.go:279] https://192.168.39.139:8444/healthz returned 200:
	ok
	I0103 20:18:18.884633   62050 api_server.go:141] control plane version: v1.28.4
	I0103 20:18:18.884662   62050 api_server.go:131] duration metric: took 3.925311693s to wait for apiserver health ...
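	The health probe above polls the apiserver's /healthz endpoint on port 8444, the non-default port used by this default-k8s-diff-port profile. A minimal sketch of the same check from inside the node, assuming the cluster's bootstrap RBAC still allows anonymous GETs on /healthz (the Kubernetes default), so curl needs no client certificate:

	  # Query the endpoint the wait loop polls; -k skips verification of the self-signed serving cert
	  out/minikube-linux-amd64 -p default-k8s-diff-port-018788 ssh "curl -sk https://192.168.39.139:8444/healthz"
	  # Expected output on a healthy control plane: ok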
	I0103 20:18:18.884672   62050 system_pods.go:43] waiting for kube-system pods to appear ...
	I0103 20:18:18.884701   62050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0103 20:18:18.884765   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0103 20:18:18.922149   62050 cri.go:89] found id: "ce56b3ad3d4b7d59eb61a980afcc5105c128779756da52fc886d00597b03c6cc"
	I0103 20:18:18.922170   62050 cri.go:89] found id: ""
	I0103 20:18:18.922177   62050 logs.go:284] 1 containers: [ce56b3ad3d4b7d59eb61a980afcc5105c128779756da52fc886d00597b03c6cc]
	I0103 20:18:18.922223   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:18.926886   62050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0103 20:18:18.926952   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0103 20:18:18.970009   62050 cri.go:89] found id: "3bacdb6bf662499f958fae41c1520a7f963ecec63e66bca7076854532316bd9d"
	I0103 20:18:18.970035   62050 cri.go:89] found id: ""
	I0103 20:18:18.970043   62050 logs.go:284] 1 containers: [3bacdb6bf662499f958fae41c1520a7f963ecec63e66bca7076854532316bd9d]
	I0103 20:18:18.970107   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:18.974349   62050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0103 20:18:18.974413   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0103 20:18:19.016970   62050 cri.go:89] found id: "e2370f79911fd2108cab00f1fb2d4c8f16fadefbef4d6ee135f875edd865be06"
	I0103 20:18:19.016994   62050 cri.go:89] found id: ""
	I0103 20:18:19.017004   62050 logs.go:284] 1 containers: [e2370f79911fd2108cab00f1fb2d4c8f16fadefbef4d6ee135f875edd865be06]
	I0103 20:18:19.017069   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:19.021899   62050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0103 20:18:19.021959   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0103 20:18:19.076044   62050 cri.go:89] found id: "abbaa7d1ca8582bf77f6d0951a5eca6711a89471f2239b41df4bd359e01e5a7c"
	I0103 20:18:19.076074   62050 cri.go:89] found id: ""
	I0103 20:18:19.076081   62050 logs.go:284] 1 containers: [abbaa7d1ca8582bf77f6d0951a5eca6711a89471f2239b41df4bd359e01e5a7c]
	I0103 20:18:19.076134   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:19.081699   62050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0103 20:18:19.081775   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0103 20:18:19.120022   62050 cri.go:89] found id: "b1525243614b036cac3291b5a2fbf29c28fdb68757d81418e7e09cfec2c36032"
	I0103 20:18:19.120046   62050 cri.go:89] found id: ""
	I0103 20:18:19.120053   62050 logs.go:284] 1 containers: [b1525243614b036cac3291b5a2fbf29c28fdb68757d81418e7e09cfec2c36032]
	I0103 20:18:19.120107   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:19.124627   62050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0103 20:18:19.124698   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0103 20:18:19.165431   62050 cri.go:89] found id: "2b7de3342fdb5423de182fd009630fc36cee11de3a5f5ff06603fedd80e2d94b"
	I0103 20:18:19.165453   62050 cri.go:89] found id: ""
	I0103 20:18:19.165463   62050 logs.go:284] 1 containers: [2b7de3342fdb5423de182fd009630fc36cee11de3a5f5ff06603fedd80e2d94b]
	I0103 20:18:19.165513   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:19.170214   62050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0103 20:18:19.170282   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0103 20:18:19.208676   62050 cri.go:89] found id: ""
	I0103 20:18:19.208706   62050 logs.go:284] 0 containers: []
	W0103 20:18:19.208716   62050 logs.go:286] No container was found matching "kindnet"
	I0103 20:18:19.208724   62050 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0103 20:18:19.208782   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0103 20:18:19.246065   62050 cri.go:89] found id: "3d1fa8b05cd7cc8dd982013b6295fb6f79f2c8db93aa0c408fc57ea7e898125a"
	I0103 20:18:19.246092   62050 cri.go:89] found id: "365147e198ba529e04c62145182c0e5131c1812d4b8842299d0236cbf556e40f"
	I0103 20:18:19.246101   62050 cri.go:89] found id: ""
	I0103 20:18:19.246109   62050 logs.go:284] 2 containers: [3d1fa8b05cd7cc8dd982013b6295fb6f79f2c8db93aa0c408fc57ea7e898125a 365147e198ba529e04c62145182c0e5131c1812d4b8842299d0236cbf556e40f]
	I0103 20:18:19.246169   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:19.250217   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:19.259598   62050 logs.go:123] Gathering logs for CRI-O ...
	I0103 20:18:19.259628   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0103 20:18:19.643718   62050 logs.go:123] Gathering logs for kube-apiserver [ce56b3ad3d4b7d59eb61a980afcc5105c128779756da52fc886d00597b03c6cc] ...
	I0103 20:18:19.643755   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce56b3ad3d4b7d59eb61a980afcc5105c128779756da52fc886d00597b03c6cc"
	I0103 20:18:19.697873   62050 logs.go:123] Gathering logs for etcd [3bacdb6bf662499f958fae41c1520a7f963ecec63e66bca7076854532316bd9d] ...
	I0103 20:18:19.697905   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bacdb6bf662499f958fae41c1520a7f963ecec63e66bca7076854532316bd9d"
	I0103 20:18:19.762995   62050 logs.go:123] Gathering logs for kubelet ...
	I0103 20:18:19.763030   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 20:18:19.830835   62050 logs.go:123] Gathering logs for describe nodes ...
	I0103 20:18:19.830871   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0103 20:18:19.969465   62050 logs.go:123] Gathering logs for kube-proxy [b1525243614b036cac3291b5a2fbf29c28fdb68757d81418e7e09cfec2c36032] ...
	I0103 20:18:19.969501   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1525243614b036cac3291b5a2fbf29c28fdb68757d81418e7e09cfec2c36032"
	I0103 20:18:20.011269   62050 logs.go:123] Gathering logs for kube-controller-manager [2b7de3342fdb5423de182fd009630fc36cee11de3a5f5ff06603fedd80e2d94b] ...
	I0103 20:18:20.011301   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b7de3342fdb5423de182fd009630fc36cee11de3a5f5ff06603fedd80e2d94b"
	I0103 20:18:20.059317   62050 logs.go:123] Gathering logs for storage-provisioner [3d1fa8b05cd7cc8dd982013b6295fb6f79f2c8db93aa0c408fc57ea7e898125a] ...
	I0103 20:18:20.059352   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d1fa8b05cd7cc8dd982013b6295fb6f79f2c8db93aa0c408fc57ea7e898125a"
	I0103 20:18:20.099428   62050 logs.go:123] Gathering logs for storage-provisioner [365147e198ba529e04c62145182c0e5131c1812d4b8842299d0236cbf556e40f] ...
	I0103 20:18:20.099455   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 365147e198ba529e04c62145182c0e5131c1812d4b8842299d0236cbf556e40f"
	I0103 20:18:20.135773   62050 logs.go:123] Gathering logs for dmesg ...
	I0103 20:18:20.135809   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 20:18:20.149611   62050 logs.go:123] Gathering logs for kube-scheduler [abbaa7d1ca8582bf77f6d0951a5eca6711a89471f2239b41df4bd359e01e5a7c] ...
	I0103 20:18:20.149649   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abbaa7d1ca8582bf77f6d0951a5eca6711a89471f2239b41df4bd359e01e5a7c"
	I0103 20:18:20.190742   62050 logs.go:123] Gathering logs for container status ...
	I0103 20:18:20.190788   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 20:18:20.241115   62050 logs.go:123] Gathering logs for coredns [e2370f79911fd2108cab00f1fb2d4c8f16fadefbef4d6ee135f875edd865be06] ...
	I0103 20:18:20.241142   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2370f79911fd2108cab00f1fb2d4c8f16fadefbef4d6ee135f875edd865be06"
	I0103 20:18:22.789475   62050 system_pods.go:59] 8 kube-system pods found
	I0103 20:18:22.789502   62050 system_pods.go:61] "coredns-5dd5756b68-zxzqg" [d066762e-7e1f-4b3a-9b21-6a7a3ca53edd] Running
	I0103 20:18:22.789507   62050 system_pods.go:61] "etcd-default-k8s-diff-port-018788" [c0023ec6-ae61-4532-840e-287e9945f4ec] Running
	I0103 20:18:22.789512   62050 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-018788" [bba03f36-cef8-4e19-adc5-1a65756bdf1c] Running
	I0103 20:18:22.789516   62050 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-018788" [baf7a3c2-3573-4977-be30-d63e4df2de22] Running
	I0103 20:18:22.789520   62050 system_pods.go:61] "kube-proxy-wqjlv" [de5a1b04-4bce-4111-bfe8-2adb2f947d78] Running
	I0103 20:18:22.789527   62050 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-018788" [cdc74e5c-0085-49ae-9471-fce52a1a6b2f] Running
	I0103 20:18:22.789533   62050 system_pods.go:61] "metrics-server-57f55c9bc5-pgbbj" [ee3963d9-1627-4e78-91e5-1f92c2011f4b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0103 20:18:22.789538   62050 system_pods.go:61] "storage-provisioner" [ef3511cb-5587-4ea5-86b6-d52cc5afb226] Running
	I0103 20:18:22.789544   62050 system_pods.go:74] duration metric: took 3.904866616s to wait for pod list to return data ...
	I0103 20:18:22.789551   62050 default_sa.go:34] waiting for default service account to be created ...
	I0103 20:18:22.791976   62050 default_sa.go:45] found service account: "default"
	I0103 20:18:22.792000   62050 default_sa.go:55] duration metric: took 2.444229ms for default service account to be created ...
	I0103 20:18:22.792007   62050 system_pods.go:116] waiting for k8s-apps to be running ...
	I0103 20:18:22.797165   62050 system_pods.go:86] 8 kube-system pods found
	I0103 20:18:22.797186   62050 system_pods.go:89] "coredns-5dd5756b68-zxzqg" [d066762e-7e1f-4b3a-9b21-6a7a3ca53edd] Running
	I0103 20:18:22.797192   62050 system_pods.go:89] "etcd-default-k8s-diff-port-018788" [c0023ec6-ae61-4532-840e-287e9945f4ec] Running
	I0103 20:18:22.797196   62050 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-018788" [bba03f36-cef8-4e19-adc5-1a65756bdf1c] Running
	I0103 20:18:22.797200   62050 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-018788" [baf7a3c2-3573-4977-be30-d63e4df2de22] Running
	I0103 20:18:22.797204   62050 system_pods.go:89] "kube-proxy-wqjlv" [de5a1b04-4bce-4111-bfe8-2adb2f947d78] Running
	I0103 20:18:22.797209   62050 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-018788" [cdc74e5c-0085-49ae-9471-fce52a1a6b2f] Running
	I0103 20:18:22.797221   62050 system_pods.go:89] "metrics-server-57f55c9bc5-pgbbj" [ee3963d9-1627-4e78-91e5-1f92c2011f4b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0103 20:18:22.797234   62050 system_pods.go:89] "storage-provisioner" [ef3511cb-5587-4ea5-86b6-d52cc5afb226] Running
	I0103 20:18:22.797244   62050 system_pods.go:126] duration metric: took 5.231578ms to wait for k8s-apps to be running ...
	I0103 20:18:22.797256   62050 system_svc.go:44] waiting for kubelet service to be running ....
	I0103 20:18:22.797303   62050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 20:18:22.811467   62050 system_svc.go:56] duration metric: took 14.201511ms WaitForService to wait for kubelet.
	I0103 20:18:22.811503   62050 kubeadm.go:581] duration metric: took 4m22.446143128s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0103 20:18:22.811533   62050 node_conditions.go:102] verifying NodePressure condition ...
	I0103 20:18:22.814594   62050 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0103 20:18:22.814617   62050 node_conditions.go:123] node cpu capacity is 2
	I0103 20:18:22.814629   62050 node_conditions.go:105] duration metric: took 3.089727ms to run NodePressure ...
	I0103 20:18:22.814639   62050 start.go:228] waiting for startup goroutines ...
	I0103 20:18:22.814645   62050 start.go:233] waiting for cluster config update ...
	I0103 20:18:22.814654   62050 start.go:242] writing updated cluster config ...
	I0103 20:18:22.814897   62050 ssh_runner.go:195] Run: rm -f paused
	I0103 20:18:22.864761   62050 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0103 20:18:22.866755   62050 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-018788" cluster and "default" namespace by default
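	The CRI-O journal excerpt below was captured with the same log-gathering commands that appear throughout this run. A minimal sketch for pulling the equivalent data directly from a node (commands run inside the VM, e.g. via minikube ssh; the container ID is a placeholder):

	  # Recent CRI-O daemon logs, as in "Gathering logs for CRI-O ..."
	  sudo journalctl -u crio -n 400

	  # All containers known to the runtime, as in "Gathering logs for container status ..."
	  sudo crictl ps -a

	  # Tail one container's logs by ID (placeholder ID)
	  sudo /usr/bin/crictl logs --tail 400 <container-id>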
	
	
	==> CRI-O <==
	-- Journal begins at Wed 2024-01-03 20:12:41 UTC, ends at Wed 2024-01-03 20:26:40 UTC. --
	Jan 03 20:26:39 embed-certs-451331 crio[714]: time="2024-01-03 20:26:39.937587637Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=5fa94d70-280b-4859-801e-fc65fe5c9dd9 name=/runtime.v1.RuntimeService/Version
	Jan 03 20:26:39 embed-certs-451331 crio[714]: time="2024-01-03 20:26:39.938563851Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=4dc6ed1d-a43f-4e98-b716-b13c052c4d10 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 20:26:39 embed-certs-451331 crio[714]: time="2024-01-03 20:26:39.939049907Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704313599939033968,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=4dc6ed1d-a43f-4e98-b716-b13c052c4d10 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 20:26:39 embed-certs-451331 crio[714]: time="2024-01-03 20:26:39.939624945Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=aabecff9-a29f-4f75-959d-defa7c6fa3c3 name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:26:39 embed-certs-451331 crio[714]: time="2024-01-03 20:26:39.939693085Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=aabecff9-a29f-4f75-959d-defa7c6fa3c3 name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:26:39 embed-certs-451331 crio[714]: time="2024-01-03 20:26:39.939980634Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0ed16e65a5dbabdbadbf6d56a4fd7bafe80d50867ed1829e86bfa8f28e8fb719,PodSandboxId:efd4060c8de3f71163c1e9350215ce5da237ea9fc1c3dd46467cebe2f5c06e3b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704312827279257291,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbce49e7-cef5-40a1-a017-906fcc77ef66,},Annotations:map[string]string{io.kubernetes.container.hash: eadca64e,io.kubernetes.container.restartCount: 2,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ac00312e7c188202128410fbd7a837dc9109127b647d5402eb8e9662c9af403,PodSandboxId:b651f1b60878ca94ac4fe1055555d60d1750f986c5c3d804b23583d7d7ac9166,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1704312806973068085,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 429c2056-bdb7-4ef4-9e0a-1689542c977e,},Annotations:map[string]string{io.kubernetes.container.hash: a819efdb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e982a226a7c2e7dda117aabd99279729198043d8b9c79fdc0904c8384f9a031b,PodSandboxId:5899d9b99bb80a0595e45a7a5d53017ec4cd2982219645bab2c8d682b07da88b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704312803919406082,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-sx6gg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a4ea161-1a32-4c3b-9a0d-b4c596492d8b,},Annotations:map[string]string{io.kubernetes.container.hash: a0f49294,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a076ccb3aaf528efca2f2b4a38d06dc3b8376212edc210731b7c268a24909ccf,PodSandboxId:ed76d9d3acd8a38a86208b4ddf1aa6c578e079c645aa6a9cdb5cba5f2a036ad0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704312796341925081,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fsnb9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1f00cf1-
e9c4-442b-a6b3-b633252b840c,},Annotations:map[string]string{io.kubernetes.container.hash: 59f57478,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c57ed4c58edf02d0e73cd6dfc6799993556d74916ffe4c07b91d85346f291e2,PodSandboxId:efd4060c8de3f71163c1e9350215ce5da237ea9fc1c3dd46467cebe2f5c06e3b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1704312796003114553,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbce49e7-ce
f5-40a1-a017-906fcc77ef66,},Annotations:map[string]string{io.kubernetes.container.hash: eadca64e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91cc8e54c59c4642dc1a808adf81ed78af36cc6dae57d37fd913d2d01d232f3d,PodSandboxId:fc7a4a9b7f40330f15b6beedc9ce4706823549eed5d11ada2261689174c6f633,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704312789595901237,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-451331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b202e71ceb565a3c0
d5e1a29eff74660,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8049f81441fd2e7bb236ca49b119c49cefb7a5c69b6880f606b3cc2789145523,PodSandboxId:36949c267ab4e5f7d9f22aaf53fc1ad96fcf391487332a1c095b0c79c1ef00ad,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704312789369771905,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-451331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 63c4c7fb050d98f09cd0c55a15d3f146,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5b2310ec90e1b2cf6666d6b054b6e3233664a226da6f2c5dbb9f1aad6bf5e40,PodSandboxId:347a463a5517897350359189bfcd8196e5a4353788e5cdf70557feac357e76c5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704312789324121741,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-451331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb324b9ebe7e80d000d3e5358d033c1a,},Annota
tions:map[string]string{io.kubernetes.container.hash: 17c5f498,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b43e6c342d85d7f5a7233dc5c79962eece071e18b4bbcd6583c66d34a8bba0d6,PodSandboxId:3023709de312df72460936079c9b7e303b80a5a349e0175a734d680329347254,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704312788995177952,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-451331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b98fe1c42fefc48f470b8f9db70b8685,},Annotations:map[
string]string{io.kubernetes.container.hash: 8a333982,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=aabecff9-a29f-4f75-959d-defa7c6fa3c3 name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:26:39 embed-certs-451331 crio[714]: time="2024-01-03 20:26:39.974288454Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=5fbec8a8-2b00-4252-a125-d9b107d9ad15 name=/runtime.v1.RuntimeService/Version
	Jan 03 20:26:39 embed-certs-451331 crio[714]: time="2024-01-03 20:26:39.974384756Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=5fbec8a8-2b00-4252-a125-d9b107d9ad15 name=/runtime.v1.RuntimeService/Version
	Jan 03 20:26:39 embed-certs-451331 crio[714]: time="2024-01-03 20:26:39.979723401Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=0703adc5-4680-430d-8eda-4448c9bcdb22 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 20:26:39 embed-certs-451331 crio[714]: time="2024-01-03 20:26:39.980217800Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704313599980191921,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=0703adc5-4680-430d-8eda-4448c9bcdb22 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 20:26:39 embed-certs-451331 crio[714]: time="2024-01-03 20:26:39.980885269Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6071efd0-b3c3-4977-9f9a-0634256f86f1 name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:26:39 embed-certs-451331 crio[714]: time="2024-01-03 20:26:39.981341949Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6071efd0-b3c3-4977-9f9a-0634256f86f1 name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:26:39 embed-certs-451331 crio[714]: time="2024-01-03 20:26:39.981945281Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0ed16e65a5dbabdbadbf6d56a4fd7bafe80d50867ed1829e86bfa8f28e8fb719,PodSandboxId:efd4060c8de3f71163c1e9350215ce5da237ea9fc1c3dd46467cebe2f5c06e3b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704312827279257291,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbce49e7-cef5-40a1-a017-906fcc77ef66,},Annotations:map[string]string{io.kubernetes.container.hash: eadca64e,io.kubernetes.container.restartCount: 2,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ac00312e7c188202128410fbd7a837dc9109127b647d5402eb8e9662c9af403,PodSandboxId:b651f1b60878ca94ac4fe1055555d60d1750f986c5c3d804b23583d7d7ac9166,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1704312806973068085,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 429c2056-bdb7-4ef4-9e0a-1689542c977e,},Annotations:map[string]string{io.kubernetes.container.hash: a819efdb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e982a226a7c2e7dda117aabd99279729198043d8b9c79fdc0904c8384f9a031b,PodSandboxId:5899d9b99bb80a0595e45a7a5d53017ec4cd2982219645bab2c8d682b07da88b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704312803919406082,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-sx6gg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a4ea161-1a32-4c3b-9a0d-b4c596492d8b,},Annotations:map[string]string{io.kubernetes.container.hash: a0f49294,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a076ccb3aaf528efca2f2b4a38d06dc3b8376212edc210731b7c268a24909ccf,PodSandboxId:ed76d9d3acd8a38a86208b4ddf1aa6c578e079c645aa6a9cdb5cba5f2a036ad0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704312796341925081,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fsnb9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1f00cf1-
e9c4-442b-a6b3-b633252b840c,},Annotations:map[string]string{io.kubernetes.container.hash: 59f57478,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c57ed4c58edf02d0e73cd6dfc6799993556d74916ffe4c07b91d85346f291e2,PodSandboxId:efd4060c8de3f71163c1e9350215ce5da237ea9fc1c3dd46467cebe2f5c06e3b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1704312796003114553,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbce49e7-ce
f5-40a1-a017-906fcc77ef66,},Annotations:map[string]string{io.kubernetes.container.hash: eadca64e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91cc8e54c59c4642dc1a808adf81ed78af36cc6dae57d37fd913d2d01d232f3d,PodSandboxId:fc7a4a9b7f40330f15b6beedc9ce4706823549eed5d11ada2261689174c6f633,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704312789595901237,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-451331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b202e71ceb565a3c0
d5e1a29eff74660,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8049f81441fd2e7bb236ca49b119c49cefb7a5c69b6880f606b3cc2789145523,PodSandboxId:36949c267ab4e5f7d9f22aaf53fc1ad96fcf391487332a1c095b0c79c1ef00ad,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704312789369771905,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-451331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 63c4c7fb050d98f09cd0c55a15d3f146,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5b2310ec90e1b2cf6666d6b054b6e3233664a226da6f2c5dbb9f1aad6bf5e40,PodSandboxId:347a463a5517897350359189bfcd8196e5a4353788e5cdf70557feac357e76c5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704312789324121741,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-451331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb324b9ebe7e80d000d3e5358d033c1a,},Annota
tions:map[string]string{io.kubernetes.container.hash: 17c5f498,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b43e6c342d85d7f5a7233dc5c79962eece071e18b4bbcd6583c66d34a8bba0d6,PodSandboxId:3023709de312df72460936079c9b7e303b80a5a349e0175a734d680329347254,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704312788995177952,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-451331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b98fe1c42fefc48f470b8f9db70b8685,},Annotations:map[
string]string{io.kubernetes.container.hash: 8a333982,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6071efd0-b3c3-4977-9f9a-0634256f86f1 name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:26:39 embed-certs-451331 crio[714]: time="2024-01-03 20:26:39.995157625Z" level=debug msg="Request: &ListImagesRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=f58ec70c-cb0c-4cf9-b834-c443777f5caf name=/runtime.v1.ImageService/ListImages
	Jan 03 20:26:39 embed-certs-451331 crio[714]: time="2024-01-03 20:26:39.995325910Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257\"" file="storage/storage_transport.go:185"
	Jan 03 20:26:39 embed-certs-451331 crio[714]: time="2024-01-03 20:26:39.995427361Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591\"" file="storage/storage_transport.go:185"
	Jan 03 20:26:39 embed-certs-451331 crio[714]: time="2024-01-03 20:26:39.995474897Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1\"" file="storage/storage_transport.go:185"
	Jan 03 20:26:39 embed-certs-451331 crio[714]: time="2024-01-03 20:26:39.995518756Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e\"" file="storage/storage_transport.go:185"
	Jan 03 20:26:39 embed-certs-451331 crio[714]: time="2024-01-03 20:26:39.995585252Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" file="storage/storage_transport.go:185"
	Jan 03 20:26:39 embed-certs-451331 crio[714]: time="2024-01-03 20:26:39.995695960Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9\"" file="storage/storage_transport.go:185"
	Jan 03 20:26:39 embed-certs-451331 crio[714]: time="2024-01-03 20:26:39.995901286Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\"" file="storage/storage_transport.go:185"
	Jan 03 20:26:39 embed-certs-451331 crio[714]: time="2024-01-03 20:26:39.995971046Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562\"" file="storage/storage_transport.go:185"
	Jan 03 20:26:39 embed-certs-451331 crio[714]: time="2024-01-03 20:26:39.996015498Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc\"" file="storage/storage_transport.go:185"
	Jan 03 20:26:39 embed-certs-451331 crio[714]: time="2024-01-03 20:26:39.996059775Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"" file="storage/storage_transport.go:185"
	Jan 03 20:26:39 embed-certs-451331 crio[714]: time="2024-01-03 20:26:39.996201947Z" level=debug msg="Response: &ListImagesResponse{Images:[]*Image{&Image{Id:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,RepoTags:[registry.k8s.io/kube-apiserver:v1.28.4],RepoDigests:[registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499 registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb],Size_:127226832,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},&Image{Id:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,RepoTags:[registry.k8s.io/kube-controller-manager:v1.28.4],RepoDigests:[registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232],Size_:123261750,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},&Image{Id:e3db313c6dbc065
d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,RepoTags:[registry.k8s.io/kube-scheduler:v1.28.4],RepoDigests:[registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32],Size_:61551410,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},&Image{Id:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,RepoTags:[registry.k8s.io/kube-proxy:v1.28.4],RepoDigests:[registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532],Size_:74749335,Uid:nil,Username:,Spec:nil,},&Image{Id:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,RepoTags:[registry.k8s.io/pause:3.9],RepoDigests:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause@sha256:8d4106c88ec0bd280
01e34c975d65175d994072d65341f62a8ab0754b0fafe10],Size_:750414,Uid:&Int64Value{Value:65535,},Username:,Spec:nil,},&Image{Id:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,RepoTags:[registry.k8s.io/etcd:3.5.9-0],RepoDigests:[registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15 registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3],Size_:295456551,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},&Image{Id:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,RepoTags:[registry.k8s.io/coredns/coredns:v1.10.1],RepoDigests:[registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378],Size_:53621675,Uid:nil,Username:,Spec:nil,},&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:
[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,},&Image{Id:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,RepoTags:[docker.io/kindest/kindnetd:v20230809-80a64d96],RepoDigests:[docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052 docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4],Size_:65258016,Uid:nil,Username:,Spec:nil,},&Image{Id:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,RepoTags:[gcr.io/k8s-minikube/busybox:1.28.4-glibc],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998],Size_:4631262,Uid:nil,Use
rname:,Spec:nil,},},}" file="go-grpc-middleware/chain.go:25" id=f58ec70c-cb0c-4cf9-b834-c443777f5caf name=/runtime.v1.ImageService/ListImages
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0ed16e65a5dba       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   efd4060c8de3f       storage-provisioner
	3ac00312e7c18       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   b651f1b60878c       busybox
	e982a226a7c2e       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      13 minutes ago      Running             coredns                   1                   5899d9b99bb80       coredns-5dd5756b68-sx6gg
	a076ccb3aaf52       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      13 minutes ago      Running             kube-proxy                1                   ed76d9d3acd8a       kube-proxy-fsnb9
	3c57ed4c58edf       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   efd4060c8de3f       storage-provisioner
	91cc8e54c59c4       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      13 minutes ago      Running             kube-scheduler            1                   fc7a4a9b7f403       kube-scheduler-embed-certs-451331
	8049f81441fd2       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      13 minutes ago      Running             kube-controller-manager   1                   36949c267ab4e       kube-controller-manager-embed-certs-451331
	d5b2310ec90e1       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      13 minutes ago      Running             etcd                      1                   347a463a55178       etcd-embed-certs-451331
	b43e6c342d85d       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      13 minutes ago      Running             kube-apiserver            1                   3023709de312d       kube-apiserver-embed-certs-451331
	
	
	==> coredns [e982a226a7c2e7dda117aabd99279729198043d8b9c79fdc0904c8384f9a031b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:43615 - 38443 "HINFO IN 5833282349375032069.6189678721608338515. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.00613889s
	
	
	==> describe nodes <==
	Name:               embed-certs-451331
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-451331
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b6a81cbc05f28310ff11df4170e79e2b8bf477a
	                    minikube.k8s.io/name=embed-certs-451331
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_03T20_04_56_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 03 Jan 2024 20:04:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-451331
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 03 Jan 2024 20:26:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 03 Jan 2024 20:23:57 +0000   Wed, 03 Jan 2024 20:04:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 03 Jan 2024 20:23:57 +0000   Wed, 03 Jan 2024 20:04:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 03 Jan 2024 20:23:57 +0000   Wed, 03 Jan 2024 20:04:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 03 Jan 2024 20:23:57 +0000   Wed, 03 Jan 2024 20:13:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.197
	  Hostname:    embed-certs-451331
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 43999723e8714a46b9bb7ee411ed1129
	  System UUID:                43999723-e871-4a46-b9bb-7ee411ed1129
	  Boot ID:                    3cd38969-9396-4492-a5a4-e874524061f1
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 coredns-5dd5756b68-sx6gg                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-embed-certs-451331                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-embed-certs-451331             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-embed-certs-451331    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-fsnb9                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-embed-certs-451331             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-57f55c9bc5-sm8rb               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         20m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node embed-certs-451331 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m                kubelet          Node embed-certs-451331 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m                kubelet          Node embed-certs-451331 status is now: NodeHasSufficientPID
	  Normal  NodeReady                21m                kubelet          Node embed-certs-451331 status is now: NodeReady
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           21m                node-controller  Node embed-certs-451331 event: Registered Node embed-certs-451331 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node embed-certs-451331 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node embed-certs-451331 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node embed-certs-451331 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node embed-certs-451331 event: Registered Node embed-certs-451331 in Controller
	
	
	==> dmesg <==
	[Jan 3 20:12] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.062573] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.332954] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.318945] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.129871] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.546459] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.045258] systemd-fstab-generator[638]: Ignoring "noauto" for root device
	[  +0.112979] systemd-fstab-generator[649]: Ignoring "noauto" for root device
	[  +0.153194] systemd-fstab-generator[662]: Ignoring "noauto" for root device
	[  +0.116447] systemd-fstab-generator[673]: Ignoring "noauto" for root device
	[  +0.226191] systemd-fstab-generator[697]: Ignoring "noauto" for root device
	[Jan 3 20:13] systemd-fstab-generator[914]: Ignoring "noauto" for root device
	[ +15.499205] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [d5b2310ec90e1b2cf6666d6b054b6e3233664a226da6f2c5dbb9f1aad6bf5e40] <==
	{"level":"info","ts":"2024-01-03T20:13:11.276006Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-01-03T20:13:11.276155Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.197:2380"}
	{"level":"info","ts":"2024-01-03T20:13:11.276311Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.197:2380"}
	{"level":"info","ts":"2024-01-03T20:13:11.279296Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-01-03T20:13:11.279244Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"37309ea842b3f618","initial-advertise-peer-urls":["https://192.168.50.197:2380"],"listen-peer-urls":["https://192.168.50.197:2380"],"advertise-client-urls":["https://192.168.50.197:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.197:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-01-03T20:13:12.738225Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"37309ea842b3f618 is starting a new election at term 2"}
	{"level":"info","ts":"2024-01-03T20:13:12.738361Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"37309ea842b3f618 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-01-03T20:13:12.738402Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"37309ea842b3f618 received MsgPreVoteResp from 37309ea842b3f618 at term 2"}
	{"level":"info","ts":"2024-01-03T20:13:12.738428Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"37309ea842b3f618 became candidate at term 3"}
	{"level":"info","ts":"2024-01-03T20:13:12.738437Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"37309ea842b3f618 received MsgVoteResp from 37309ea842b3f618 at term 3"}
	{"level":"info","ts":"2024-01-03T20:13:12.738449Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"37309ea842b3f618 became leader at term 3"}
	{"level":"info","ts":"2024-01-03T20:13:12.738459Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 37309ea842b3f618 elected leader 37309ea842b3f618 at term 3"}
	{"level":"info","ts":"2024-01-03T20:13:12.741273Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"37309ea842b3f618","local-member-attributes":"{Name:embed-certs-451331 ClientURLs:[https://192.168.50.197:2379]}","request-path":"/0/members/37309ea842b3f618/attributes","cluster-id":"b82d2d0acaa655b","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-03T20:13:12.741325Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-03T20:13:12.741588Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-03T20:13:12.741662Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-03T20:13:12.741714Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-03T20:13:12.742523Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-03T20:13:12.743382Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.197:2379"}
	{"level":"info","ts":"2024-01-03T20:13:16.312211Z","caller":"traceutil/trace.go:171","msg":"trace[196740903] transaction","detail":"{read_only:false; number_of_response:0; response_revision:493; }","duration":"101.15089ms","start":"2024-01-03T20:13:16.211039Z","end":"2024-01-03T20:13:16.31219Z","steps":["trace[196740903] 'process raft request'  (duration: 101.09332ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-03T20:13:27.541586Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"119.306999ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-57f55c9bc5-sm8rb\" ","response":"range_response_count:1 size:4071"}
	{"level":"info","ts":"2024-01-03T20:13:27.541685Z","caller":"traceutil/trace.go:171","msg":"trace[1976195513] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-57f55c9bc5-sm8rb; range_end:; response_count:1; response_revision:589; }","duration":"119.452704ms","start":"2024-01-03T20:13:27.422218Z","end":"2024-01-03T20:13:27.541671Z","steps":["trace[1976195513] 'range keys from in-memory index tree'  (duration: 119.067249ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-03T20:23:12.783931Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":833}
	{"level":"info","ts":"2024-01-03T20:23:12.787276Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":833,"took":"2.323476ms","hash":845675121}
	{"level":"info","ts":"2024-01-03T20:23:12.787369Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":845675121,"revision":833,"compact-revision":-1}
	
	
	==> kernel <==
	 20:26:40 up 14 min,  0 users,  load average: 0.07, 0.20, 0.17
	Linux embed-certs-451331 5.10.57 #1 SMP Sat Dec 16 11:03:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [b43e6c342d85d7f5a7233dc5c79962eece071e18b4bbcd6583c66d34a8bba0d6] <==
	I0103 20:23:14.462248       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0103 20:23:15.462626       1 handler_proxy.go:93] no RequestInfo found in the context
	E0103 20:23:15.462685       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0103 20:23:15.462700       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0103 20:23:15.462824       1 handler_proxy.go:93] no RequestInfo found in the context
	E0103 20:23:15.462878       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0103 20:23:15.463858       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0103 20:24:14.352427       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0103 20:24:15.463952       1 handler_proxy.go:93] no RequestInfo found in the context
	E0103 20:24:15.464174       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0103 20:24:15.464209       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0103 20:24:15.464302       1 handler_proxy.go:93] no RequestInfo found in the context
	E0103 20:24:15.464386       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0103 20:24:15.466212       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0103 20:25:14.352491       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0103 20:26:14.353044       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0103 20:26:15.464339       1 handler_proxy.go:93] no RequestInfo found in the context
	E0103 20:26:15.464411       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0103 20:26:15.464420       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0103 20:26:15.466767       1 handler_proxy.go:93] no RequestInfo found in the context
	E0103 20:26:15.466989       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0103 20:26:15.467024       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [8049f81441fd2e7bb236ca49b119c49cefb7a5c69b6880f606b3cc2789145523] <==
	I0103 20:20:57.936059       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0103 20:21:27.452060       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0103 20:21:27.946152       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0103 20:21:57.458611       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0103 20:21:57.955491       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0103 20:22:27.466210       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0103 20:22:27.963099       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0103 20:22:57.471342       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0103 20:22:57.971952       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0103 20:23:27.481657       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0103 20:23:27.988934       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0103 20:23:57.487986       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0103 20:23:57.997691       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0103 20:24:27.063864       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="441.127µs"
	E0103 20:24:27.494338       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0103 20:24:28.011585       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0103 20:24:41.063184       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="188.967µs"
	E0103 20:24:57.500315       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0103 20:24:58.029022       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0103 20:25:27.512373       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0103 20:25:28.038598       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0103 20:25:57.518345       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0103 20:25:58.051843       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0103 20:26:27.524489       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0103 20:26:28.061195       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [a076ccb3aaf528efca2f2b4a38d06dc3b8376212edc210731b7c268a24909ccf] <==
	I0103 20:13:16.668211       1 server_others.go:69] "Using iptables proxy"
	I0103 20:13:16.683293       1 node.go:141] Successfully retrieved node IP: 192.168.50.197
	I0103 20:13:16.739930       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0103 20:13:16.740003       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0103 20:13:16.742934       1 server_others.go:152] "Using iptables Proxier"
	I0103 20:13:16.743012       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0103 20:13:16.743237       1 server.go:846] "Version info" version="v1.28.4"
	I0103 20:13:16.743280       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0103 20:13:16.744157       1 config.go:188] "Starting service config controller"
	I0103 20:13:16.744211       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0103 20:13:16.744244       1 config.go:97] "Starting endpoint slice config controller"
	I0103 20:13:16.744260       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0103 20:13:16.746071       1 config.go:315] "Starting node config controller"
	I0103 20:13:16.746113       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0103 20:13:16.844382       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0103 20:13:16.844484       1 shared_informer.go:318] Caches are synced for service config
	I0103 20:13:16.847046       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [91cc8e54c59c4642dc1a808adf81ed78af36cc6dae57d37fd913d2d01d232f3d] <==
	I0103 20:13:11.666382       1 serving.go:348] Generated self-signed cert in-memory
	W0103 20:13:14.432179       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0103 20:13:14.432269       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0103 20:13:14.432300       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0103 20:13:14.432323       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0103 20:13:14.454344       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0103 20:13:14.454390       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0103 20:13:14.456408       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0103 20:13:14.456532       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0103 20:13:14.458329       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0103 20:13:14.458401       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0103 20:13:14.557687       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Wed 2024-01-03 20:12:41 UTC, ends at Wed 2024-01-03 20:26:40 UTC. --
	Jan 03 20:24:08 embed-certs-451331 kubelet[920]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 03 20:24:08 embed-certs-451331 kubelet[920]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 03 20:24:08 embed-certs-451331 kubelet[920]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 03 20:24:12 embed-certs-451331 kubelet[920]: E0103 20:24:12.061004     920 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 03 20:24:12 embed-certs-451331 kubelet[920]: E0103 20:24:12.061055     920 kuberuntime_image.go:53] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 03 20:24:12 embed-certs-451331 kubelet[920]: E0103 20:24:12.061299     920 kuberuntime_manager.go:1261] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-sd9td,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Pro
beHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fi
le,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-sm8rb_kube-system(12b9f83d-abf8-431c-a271-b8489d32f0de): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 03 20:24:12 embed-certs-451331 kubelet[920]: E0103 20:24:12.061343     920 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-sm8rb" podUID="12b9f83d-abf8-431c-a271-b8489d32f0de"
	Jan 03 20:24:27 embed-certs-451331 kubelet[920]: E0103 20:24:27.045952     920 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-sm8rb" podUID="12b9f83d-abf8-431c-a271-b8489d32f0de"
	Jan 03 20:24:41 embed-certs-451331 kubelet[920]: E0103 20:24:41.045924     920 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-sm8rb" podUID="12b9f83d-abf8-431c-a271-b8489d32f0de"
	Jan 03 20:24:56 embed-certs-451331 kubelet[920]: E0103 20:24:56.046219     920 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-sm8rb" podUID="12b9f83d-abf8-431c-a271-b8489d32f0de"
	Jan 03 20:25:08 embed-certs-451331 kubelet[920]: E0103 20:25:08.073492     920 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 03 20:25:08 embed-certs-451331 kubelet[920]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 03 20:25:08 embed-certs-451331 kubelet[920]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 03 20:25:08 embed-certs-451331 kubelet[920]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 03 20:25:11 embed-certs-451331 kubelet[920]: E0103 20:25:11.045645     920 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-sm8rb" podUID="12b9f83d-abf8-431c-a271-b8489d32f0de"
	Jan 03 20:25:24 embed-certs-451331 kubelet[920]: E0103 20:25:24.046603     920 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-sm8rb" podUID="12b9f83d-abf8-431c-a271-b8489d32f0de"
	Jan 03 20:25:36 embed-certs-451331 kubelet[920]: E0103 20:25:36.048016     920 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-sm8rb" podUID="12b9f83d-abf8-431c-a271-b8489d32f0de"
	Jan 03 20:25:49 embed-certs-451331 kubelet[920]: E0103 20:25:49.045821     920 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-sm8rb" podUID="12b9f83d-abf8-431c-a271-b8489d32f0de"
	Jan 03 20:26:00 embed-certs-451331 kubelet[920]: E0103 20:26:00.045902     920 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-sm8rb" podUID="12b9f83d-abf8-431c-a271-b8489d32f0de"
	Jan 03 20:26:08 embed-certs-451331 kubelet[920]: E0103 20:26:08.072493     920 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 03 20:26:08 embed-certs-451331 kubelet[920]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 03 20:26:08 embed-certs-451331 kubelet[920]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 03 20:26:08 embed-certs-451331 kubelet[920]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 03 20:26:15 embed-certs-451331 kubelet[920]: E0103 20:26:15.044998     920 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-sm8rb" podUID="12b9f83d-abf8-431c-a271-b8489d32f0de"
	Jan 03 20:26:27 embed-certs-451331 kubelet[920]: E0103 20:26:27.045527     920 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-sm8rb" podUID="12b9f83d-abf8-431c-a271-b8489d32f0de"
	
	
	==> storage-provisioner [0ed16e65a5dbabdbadbf6d56a4fd7bafe80d50867ed1829e86bfa8f28e8fb719] <==
	I0103 20:13:47.454346       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0103 20:13:47.469076       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0103 20:13:47.469201       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0103 20:14:04.879494       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0103 20:14:04.879712       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-451331_24b7586b-b269-4a34-a6ee-21fcdf43cedc!
	I0103 20:14:04.881294       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7f931a4b-3ae8-49f4-84c3-558c77e6b271", APIVersion:"v1", ResourceVersion:"616", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-451331_24b7586b-b269-4a34-a6ee-21fcdf43cedc became leader
	I0103 20:14:04.980387       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-451331_24b7586b-b269-4a34-a6ee-21fcdf43cedc!
	
	
	==> storage-provisioner [3c57ed4c58edf02d0e73cd6dfc6799993556d74916ffe4c07b91d85346f291e2] <==
	I0103 20:13:16.508894       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0103 20:13:46.517331       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-451331 -n embed-certs-451331
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-451331 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-sm8rb
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-451331 describe pod metrics-server-57f55c9bc5-sm8rb
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-451331 describe pod metrics-server-57f55c9bc5-sm8rb: exit status 1 (65.241271ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-sm8rb" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-451331 describe pod metrics-server-57f55c9bc5-sm8rb: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (543.17s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (543.12s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-749210 -n no-preload-749210
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-01-03 20:27:14.764921541 +0000 UTC m=+5400.237498550
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-749210 -n no-preload-749210
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-749210 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-749210 logs -n 25: (1.600043832s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-719541 sudo cat                              | bridge-719541                | jenkins | v1.32.0 | 03 Jan 24 20:04 UTC | 03 Jan 24 20:04 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-719541 sudo                                  | bridge-719541                | jenkins | v1.32.0 | 03 Jan 24 20:04 UTC | 03 Jan 24 20:04 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-719541 sudo                                  | bridge-719541                | jenkins | v1.32.0 | 03 Jan 24 20:04 UTC | 03 Jan 24 20:04 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-719541 sudo                                  | bridge-719541                | jenkins | v1.32.0 | 03 Jan 24 20:04 UTC | 03 Jan 24 20:04 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-719541 sudo find                             | bridge-719541                | jenkins | v1.32.0 | 03 Jan 24 20:04 UTC | 03 Jan 24 20:04 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-719541 sudo crio                             | bridge-719541                | jenkins | v1.32.0 | 03 Jan 24 20:04 UTC | 03 Jan 24 20:04 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-719541                                       | bridge-719541                | jenkins | v1.32.0 | 03 Jan 24 20:04 UTC | 03 Jan 24 20:04 UTC |
	| delete  | -p                                                     | disable-driver-mounts-350596 | jenkins | v1.32.0 | 03 Jan 24 20:04 UTC | 03 Jan 24 20:04 UTC |
	|         | disable-driver-mounts-350596                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-018788 | jenkins | v1.32.0 | 03 Jan 24 20:04 UTC | 03 Jan 24 20:06 UTC |
	|         | default-k8s-diff-port-018788                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-927922        | old-k8s-version-927922       | jenkins | v1.32.0 | 03 Jan 24 20:05 UTC | 03 Jan 24 20:05 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-927922                              | old-k8s-version-927922       | jenkins | v1.32.0 | 03 Jan 24 20:05 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-451331            | embed-certs-451331           | jenkins | v1.32.0 | 03 Jan 24 20:05 UTC | 03 Jan 24 20:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-451331                                  | embed-certs-451331           | jenkins | v1.32.0 | 03 Jan 24 20:06 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-749210             | no-preload-749210            | jenkins | v1.32.0 | 03 Jan 24 20:06 UTC | 03 Jan 24 20:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-749210                                   | no-preload-749210            | jenkins | v1.32.0 | 03 Jan 24 20:06 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-018788  | default-k8s-diff-port-018788 | jenkins | v1.32.0 | 03 Jan 24 20:06 UTC | 03 Jan 24 20:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-018788 | jenkins | v1.32.0 | 03 Jan 24 20:06 UTC |                     |
	|         | default-k8s-diff-port-018788                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-927922             | old-k8s-version-927922       | jenkins | v1.32.0 | 03 Jan 24 20:07 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-927922                              | old-k8s-version-927922       | jenkins | v1.32.0 | 03 Jan 24 20:07 UTC | 03 Jan 24 20:14 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-451331                 | embed-certs-451331           | jenkins | v1.32.0 | 03 Jan 24 20:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-451331                                  | embed-certs-451331           | jenkins | v1.32.0 | 03 Jan 24 20:08 UTC | 03 Jan 24 20:17 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-749210                  | no-preload-749210            | jenkins | v1.32.0 | 03 Jan 24 20:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-018788       | default-k8s-diff-port-018788 | jenkins | v1.32.0 | 03 Jan 24 20:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-749210                                   | no-preload-749210            | jenkins | v1.32.0 | 03 Jan 24 20:09 UTC | 03 Jan 24 20:18 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-018788 | jenkins | v1.32.0 | 03 Jan 24 20:09 UTC | 03 Jan 24 20:18 UTC |
	|         | default-k8s-diff-port-018788                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/03 20:09:05
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0103 20:09:05.502375   62050 out.go:296] Setting OutFile to fd 1 ...
	I0103 20:09:05.502548   62050 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 20:09:05.502558   62050 out.go:309] Setting ErrFile to fd 2...
	I0103 20:09:05.502566   62050 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 20:09:05.502759   62050 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17885-9609/.minikube/bin
	I0103 20:09:05.503330   62050 out.go:303] Setting JSON to false
	I0103 20:09:05.504222   62050 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6693,"bootTime":1704305853,"procs":223,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0103 20:09:05.504283   62050 start.go:138] virtualization: kvm guest
	I0103 20:09:05.507002   62050 out.go:177] * [default-k8s-diff-port-018788] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0103 20:09:05.508642   62050 out.go:177]   - MINIKUBE_LOCATION=17885
	I0103 20:09:05.508667   62050 notify.go:220] Checking for updates...
	I0103 20:09:05.510296   62050 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0103 20:09:05.511927   62050 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17885-9609/kubeconfig
	I0103 20:09:05.513487   62050 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17885-9609/.minikube
	I0103 20:09:05.515064   62050 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0103 20:09:05.516515   62050 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0103 20:09:05.518301   62050 config.go:182] Loaded profile config "default-k8s-diff-port-018788": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 20:09:05.518774   62050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:09:05.518827   62050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:09:05.533730   62050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37361
	I0103 20:09:05.534098   62050 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:09:05.534667   62050 main.go:141] libmachine: Using API Version  1
	I0103 20:09:05.534699   62050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:09:05.535027   62050 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:09:05.535298   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .DriverName
	I0103 20:09:05.535543   62050 driver.go:392] Setting default libvirt URI to qemu:///system
	I0103 20:09:05.535823   62050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:09:05.535855   62050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:09:05.549808   62050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33389
	I0103 20:09:05.550147   62050 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:09:05.550708   62050 main.go:141] libmachine: Using API Version  1
	I0103 20:09:05.550733   62050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:09:05.551041   62050 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:09:05.551258   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .DriverName
	I0103 20:09:05.583981   62050 out.go:177] * Using the kvm2 driver based on existing profile
	I0103 20:09:05.585560   62050 start.go:298] selected driver: kvm2
	I0103 20:09:05.585580   62050 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-018788 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuber
netesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-018788 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.139 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRe
quested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 20:09:05.585707   62050 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0103 20:09:05.586411   62050 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:09:05.586494   62050 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17885-9609/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0103 20:09:05.601346   62050 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0103 20:09:05.601747   62050 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0103 20:09:05.601812   62050 cni.go:84] Creating CNI manager for ""
	I0103 20:09:05.601828   62050 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0103 20:09:05.601839   62050 start_flags.go:323] config:
	{Name:default-k8s-diff-port-018788 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-01878
8 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.139 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/
home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 20:09:05.602011   62050 iso.go:125] acquiring lock: {Name:mk59d09085a9554144b68de9b7bfe0e0fce53cc5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:09:05.604007   62050 out.go:177] * Starting control plane node default-k8s-diff-port-018788 in cluster default-k8s-diff-port-018788
	I0103 20:09:03.174819   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:09:06.246788   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:09:04.840696   62015 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0103 20:09:04.840826   62015 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/no-preload-749210/config.json ...
	I0103 20:09:04.840950   62015 cache.go:107] acquiring lock: {Name:mk76774936d94ce826f83ee0faaaf3557831e6bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:09:04.840994   62015 cache.go:107] acquiring lock: {Name:mk25b47a2b083e99837dbc206b0832b20d7da669 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:09:04.841017   62015 cache.go:107] acquiring lock: {Name:mk0a26120b5274bc796f1ae286da54dda262a5a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:09:04.841059   62015 cache.go:115] /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 exists
	I0103 20:09:04.841064   62015 start.go:365] acquiring machines lock for no-preload-749210: {Name:mk43df5d7e9fef8aa5f3e5c539ca15bff35ae8cf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0103 20:09:04.841070   62015 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2" took 128.344µs
	I0103 20:09:04.841078   62015 cache.go:115] /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 exists
	I0103 20:09:04.841081   62015 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 succeeded
	I0103 20:09:04.841085   62015 cache.go:115] /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 exists
	I0103 20:09:04.840951   62015 cache.go:107] acquiring lock: {Name:mk372d2259ddc4c784d2a14a7416ba9b749d6f9a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:09:04.841089   62015 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9" took 97.811µs
	I0103 20:09:04.841093   62015 cache.go:96] cache image "registry.k8s.io/etcd:3.5.10-0" -> "/home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0" took 87.964µs
	I0103 20:09:04.841108   62015 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 succeeded
	I0103 20:09:04.841109   62015 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.10-0 -> /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 succeeded
	I0103 20:09:04.841115   62015 cache.go:115] /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0103 20:09:04.841052   62015 cache.go:107] acquiring lock: {Name:mk04d21d7cdef9332755ef804a44022ba9c4a8c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:09:04.841129   62015 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 185.143µs
	I0103 20:09:04.841155   62015 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0103 20:09:04.841139   62015 cache.go:107] acquiring lock: {Name:mk5c34e1c9b00efde01e776962411ad1105596ae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:09:04.841183   62015 cache.go:115] /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0103 20:09:04.841203   62015 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1" took 176.832µs
	I0103 20:09:04.841212   62015 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0103 20:09:04.841400   62015 cache.go:107] acquiring lock: {Name:mk0ae9e390d74a93289bc4e45b5511dce57beeb9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:09:04.841216   62015 cache.go:107] acquiring lock: {Name:mkccb08ee6224be0e6786052f4bebc8d21ec8a42 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:09:04.841614   62015 cache.go:115] /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 exists
	I0103 20:09:04.841633   62015 cache.go:115] /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 exists
	I0103 20:09:04.841675   62015 cache.go:115] /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 exists
	I0103 20:09:04.841679   62015 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2" took 497.325µs
	I0103 20:09:04.841672   62015 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2" took 557.891µs
	I0103 20:09:04.841716   62015 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 succeeded
	I0103 20:09:04.841696   62015 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2" took 499.205µs
	I0103 20:09:04.841745   62015 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 succeeded
	I0103 20:09:04.841706   62015 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 succeeded
	I0103 20:09:04.841755   62015 cache.go:87] Successfully saved all images to host disk.
	I0103 20:09:05.605517   62050 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0103 20:09:05.605574   62050 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0103 20:09:05.605590   62050 cache.go:56] Caching tarball of preloaded images
	I0103 20:09:05.605669   62050 preload.go:174] Found /home/jenkins/minikube-integration/17885-9609/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0103 20:09:05.605681   62050 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0103 20:09:05.605787   62050 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/default-k8s-diff-port-018788/config.json ...
	I0103 20:09:05.605973   62050 start.go:365] acquiring machines lock for default-k8s-diff-port-018788: {Name:mk43df5d7e9fef8aa5f3e5c539ca15bff35ae8cf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0103 20:09:12.326805   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:09:15.398807   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:09:21.478760   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:09:24.550821   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:09:30.630841   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:09:33.702766   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:09:39.782732   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:09:42.854926   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:09:48.934815   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:09:52.006845   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:09:58.086804   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:10:01.158903   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:10:07.238808   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:10:10.310897   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:10:16.390869   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:10:19.462833   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:10:25.542866   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:10:28.614753   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:10:34.694867   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:10:37.766876   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:10:43.846838   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:10:46.918843   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:10:52.998853   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:10:56.070822   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:11:02.150825   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:11:05.222884   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:11:11.302787   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:11:14.374818   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:11:20.454810   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:11:23.526899   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:11:29.606842   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:11:32.678789   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:11:38.758787   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:11:41.830855   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:11:47.910801   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:11:50.982868   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:11:57.062889   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:12:00.134834   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:12:06.214856   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:12:09.286845   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:12:15.366787   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:12:18.438756   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:12:24.518814   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:12:27.590887   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:12:30.594981   61676 start.go:369] acquired machines lock for "embed-certs-451331" in 3m56.986277612s
	I0103 20:12:30.595030   61676 start.go:96] Skipping create...Using existing machine configuration
	I0103 20:12:30.595039   61676 fix.go:54] fixHost starting: 
	I0103 20:12:30.595434   61676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:12:30.595466   61676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:12:30.609917   61676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43047
	I0103 20:12:30.610302   61676 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:12:30.610819   61676 main.go:141] libmachine: Using API Version  1
	I0103 20:12:30.610845   61676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:12:30.611166   61676 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:12:30.611348   61676 main.go:141] libmachine: (embed-certs-451331) Calling .DriverName
	I0103 20:12:30.611486   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetState
	I0103 20:12:30.613108   61676 fix.go:102] recreateIfNeeded on embed-certs-451331: state=Stopped err=<nil>
	I0103 20:12:30.613128   61676 main.go:141] libmachine: (embed-certs-451331) Calling .DriverName
	W0103 20:12:30.613291   61676 fix.go:128] unexpected machine state, will restart: <nil>
	I0103 20:12:30.615194   61676 out.go:177] * Restarting existing kvm2 VM for "embed-certs-451331" ...
	I0103 20:12:30.592855   61400 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0103 20:12:30.592889   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHHostname
	I0103 20:12:30.594843   61400 machine.go:91] provisioned docker machine in 4m37.406324683s
	I0103 20:12:30.594886   61400 fix.go:56] fixHost completed within 4m37.42774841s
	I0103 20:12:30.594892   61400 start.go:83] releasing machines lock for "old-k8s-version-927922", held for 4m37.427764519s
	W0103 20:12:30.594913   61400 start.go:694] error starting host: provision: host is not running
	W0103 20:12:30.595005   61400 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0103 20:12:30.595014   61400 start.go:709] Will try again in 5 seconds ...
	I0103 20:12:30.616365   61676 main.go:141] libmachine: (embed-certs-451331) Calling .Start
	I0103 20:12:30.616513   61676 main.go:141] libmachine: (embed-certs-451331) Ensuring networks are active...
	I0103 20:12:30.617380   61676 main.go:141] libmachine: (embed-certs-451331) Ensuring network default is active
	I0103 20:12:30.617718   61676 main.go:141] libmachine: (embed-certs-451331) Ensuring network mk-embed-certs-451331 is active
	I0103 20:12:30.618103   61676 main.go:141] libmachine: (embed-certs-451331) Getting domain xml...
	I0103 20:12:30.618735   61676 main.go:141] libmachine: (embed-certs-451331) Creating domain...
	I0103 20:12:31.839751   61676 main.go:141] libmachine: (embed-certs-451331) Waiting to get IP...
	I0103 20:12:31.840608   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:31.841035   61676 main.go:141] libmachine: (embed-certs-451331) DBG | unable to find current IP address of domain embed-certs-451331 in network mk-embed-certs-451331
	I0103 20:12:31.841117   61676 main.go:141] libmachine: (embed-certs-451331) DBG | I0103 20:12:31.841008   62575 retry.go:31] will retry after 303.323061ms: waiting for machine to come up
	I0103 20:12:32.146508   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:32.147005   61676 main.go:141] libmachine: (embed-certs-451331) DBG | unable to find current IP address of domain embed-certs-451331 in network mk-embed-certs-451331
	I0103 20:12:32.147037   61676 main.go:141] libmachine: (embed-certs-451331) DBG | I0103 20:12:32.146950   62575 retry.go:31] will retry after 240.92709ms: waiting for machine to come up
	I0103 20:12:32.389487   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:32.389931   61676 main.go:141] libmachine: (embed-certs-451331) DBG | unable to find current IP address of domain embed-certs-451331 in network mk-embed-certs-451331
	I0103 20:12:32.389962   61676 main.go:141] libmachine: (embed-certs-451331) DBG | I0103 20:12:32.389887   62575 retry.go:31] will retry after 473.263026ms: waiting for machine to come up
	I0103 20:12:32.864624   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:32.865060   61676 main.go:141] libmachine: (embed-certs-451331) DBG | unable to find current IP address of domain embed-certs-451331 in network mk-embed-certs-451331
	I0103 20:12:32.865082   61676 main.go:141] libmachine: (embed-certs-451331) DBG | I0103 20:12:32.865006   62575 retry.go:31] will retry after 473.373684ms: waiting for machine to come up
	I0103 20:12:33.339691   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:33.340156   61676 main.go:141] libmachine: (embed-certs-451331) DBG | unable to find current IP address of domain embed-certs-451331 in network mk-embed-certs-451331
	I0103 20:12:33.340189   61676 main.go:141] libmachine: (embed-certs-451331) DBG | I0103 20:12:33.340098   62575 retry.go:31] will retry after 639.850669ms: waiting for machine to come up
	I0103 20:12:35.596669   61400 start.go:365] acquiring machines lock for old-k8s-version-927922: {Name:mk43df5d7e9fef8aa5f3e5c539ca15bff35ae8cf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0103 20:12:33.982104   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:33.982622   61676 main.go:141] libmachine: (embed-certs-451331) DBG | unable to find current IP address of domain embed-certs-451331 in network mk-embed-certs-451331
	I0103 20:12:33.982655   61676 main.go:141] libmachine: (embed-certs-451331) DBG | I0103 20:12:33.982583   62575 retry.go:31] will retry after 589.282725ms: waiting for machine to come up
	I0103 20:12:34.573280   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:34.573692   61676 main.go:141] libmachine: (embed-certs-451331) DBG | unable to find current IP address of domain embed-certs-451331 in network mk-embed-certs-451331
	I0103 20:12:34.573716   61676 main.go:141] libmachine: (embed-certs-451331) DBG | I0103 20:12:34.573639   62575 retry.go:31] will retry after 884.387817ms: waiting for machine to come up
	I0103 20:12:35.459819   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:35.460233   61676 main.go:141] libmachine: (embed-certs-451331) DBG | unable to find current IP address of domain embed-certs-451331 in network mk-embed-certs-451331
	I0103 20:12:35.460287   61676 main.go:141] libmachine: (embed-certs-451331) DBG | I0103 20:12:35.460168   62575 retry.go:31] will retry after 1.326571684s: waiting for machine to come up
	I0103 20:12:36.788923   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:36.789429   61676 main.go:141] libmachine: (embed-certs-451331) DBG | unable to find current IP address of domain embed-certs-451331 in network mk-embed-certs-451331
	I0103 20:12:36.789452   61676 main.go:141] libmachine: (embed-certs-451331) DBG | I0103 20:12:36.789395   62575 retry.go:31] will retry after 1.436230248s: waiting for machine to come up
	I0103 20:12:38.227994   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:38.228374   61676 main.go:141] libmachine: (embed-certs-451331) DBG | unable to find current IP address of domain embed-certs-451331 in network mk-embed-certs-451331
	I0103 20:12:38.228397   61676 main.go:141] libmachine: (embed-certs-451331) DBG | I0103 20:12:38.228336   62575 retry.go:31] will retry after 2.127693351s: waiting for machine to come up
	I0103 20:12:40.358485   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:40.358968   61676 main.go:141] libmachine: (embed-certs-451331) DBG | unable to find current IP address of domain embed-certs-451331 in network mk-embed-certs-451331
	I0103 20:12:40.358998   61676 main.go:141] libmachine: (embed-certs-451331) DBG | I0103 20:12:40.358912   62575 retry.go:31] will retry after 1.816116886s: waiting for machine to come up
	I0103 20:12:42.177782   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:42.178359   61676 main.go:141] libmachine: (embed-certs-451331) DBG | unable to find current IP address of domain embed-certs-451331 in network mk-embed-certs-451331
	I0103 20:12:42.178390   61676 main.go:141] libmachine: (embed-certs-451331) DBG | I0103 20:12:42.178296   62575 retry.go:31] will retry after 3.199797073s: waiting for machine to come up
	I0103 20:12:45.381712   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:45.382053   61676 main.go:141] libmachine: (embed-certs-451331) DBG | unable to find current IP address of domain embed-certs-451331 in network mk-embed-certs-451331
	I0103 20:12:45.382075   61676 main.go:141] libmachine: (embed-certs-451331) DBG | I0103 20:12:45.381991   62575 retry.go:31] will retry after 3.573315393s: waiting for machine to come up
	I0103 20:12:50.159164   62015 start.go:369] acquired machines lock for "no-preload-749210" in 3m45.318070652s
	I0103 20:12:50.159226   62015 start.go:96] Skipping create...Using existing machine configuration
	I0103 20:12:50.159235   62015 fix.go:54] fixHost starting: 
	I0103 20:12:50.159649   62015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:12:50.159688   62015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:12:50.176573   62015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34959
	I0103 20:12:50.176998   62015 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:12:50.177504   62015 main.go:141] libmachine: Using API Version  1
	I0103 20:12:50.177529   62015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:12:50.177925   62015 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:12:50.178125   62015 main.go:141] libmachine: (no-preload-749210) Calling .DriverName
	I0103 20:12:50.178297   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetState
	I0103 20:12:50.179850   62015 fix.go:102] recreateIfNeeded on no-preload-749210: state=Stopped err=<nil>
	I0103 20:12:50.179873   62015 main.go:141] libmachine: (no-preload-749210) Calling .DriverName
	W0103 20:12:50.180066   62015 fix.go:128] unexpected machine state, will restart: <nil>
	I0103 20:12:50.182450   62015 out.go:177] * Restarting existing kvm2 VM for "no-preload-749210" ...
	I0103 20:12:48.959159   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:48.959637   61676 main.go:141] libmachine: (embed-certs-451331) Found IP for machine: 192.168.50.197
	I0103 20:12:48.959655   61676 main.go:141] libmachine: (embed-certs-451331) Reserving static IP address...
	I0103 20:12:48.959666   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has current primary IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:48.960051   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "embed-certs-451331", mac: "52:54:00:38:4a:19", ip: "192.168.50.197"} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:12:48.960073   61676 main.go:141] libmachine: (embed-certs-451331) DBG | skip adding static IP to network mk-embed-certs-451331 - found existing host DHCP lease matching {name: "embed-certs-451331", mac: "52:54:00:38:4a:19", ip: "192.168.50.197"}
	I0103 20:12:48.960086   61676 main.go:141] libmachine: (embed-certs-451331) Reserved static IP address: 192.168.50.197
	I0103 20:12:48.960101   61676 main.go:141] libmachine: (embed-certs-451331) Waiting for SSH to be available...
	I0103 20:12:48.960117   61676 main.go:141] libmachine: (embed-certs-451331) DBG | Getting to WaitForSSH function...
	I0103 20:12:48.962160   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:48.962443   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:4a:19", ip: ""} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:12:48.962478   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:48.962611   61676 main.go:141] libmachine: (embed-certs-451331) DBG | Using SSH client type: external
	I0103 20:12:48.962631   61676 main.go:141] libmachine: (embed-certs-451331) DBG | Using SSH private key: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/embed-certs-451331/id_rsa (-rw-------)
	I0103 20:12:48.962661   61676 main.go:141] libmachine: (embed-certs-451331) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.197 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17885-9609/.minikube/machines/embed-certs-451331/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0103 20:12:48.962681   61676 main.go:141] libmachine: (embed-certs-451331) DBG | About to run SSH command:
	I0103 20:12:48.962718   61676 main.go:141] libmachine: (embed-certs-451331) DBG | exit 0
	I0103 20:12:49.058790   61676 main.go:141] libmachine: (embed-certs-451331) DBG | SSH cmd err, output: <nil>: 
	I0103 20:12:49.059176   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetConfigRaw
	I0103 20:12:49.059838   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetIP
	I0103 20:12:49.062025   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:49.062407   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:4a:19", ip: ""} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:12:49.062440   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:49.062697   61676 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/embed-certs-451331/config.json ...
	I0103 20:12:49.062878   61676 machine.go:88] provisioning docker machine ...
	I0103 20:12:49.062894   61676 main.go:141] libmachine: (embed-certs-451331) Calling .DriverName
	I0103 20:12:49.063097   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetMachineName
	I0103 20:12:49.063258   61676 buildroot.go:166] provisioning hostname "embed-certs-451331"
	I0103 20:12:49.063278   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetMachineName
	I0103 20:12:49.063423   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHHostname
	I0103 20:12:49.065735   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:49.066121   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:4a:19", ip: ""} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:12:49.066161   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:49.066328   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHPort
	I0103 20:12:49.066507   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHKeyPath
	I0103 20:12:49.066695   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHKeyPath
	I0103 20:12:49.066860   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHUsername
	I0103 20:12:49.067065   61676 main.go:141] libmachine: Using SSH client type: native
	I0103 20:12:49.067455   61676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.50.197 22 <nil> <nil>}
	I0103 20:12:49.067469   61676 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-451331 && echo "embed-certs-451331" | sudo tee /etc/hostname
	I0103 20:12:49.210431   61676 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-451331
	
	I0103 20:12:49.210465   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHHostname
	I0103 20:12:49.213162   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:49.213503   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:4a:19", ip: ""} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:12:49.213573   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:49.213682   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHPort
	I0103 20:12:49.213911   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHKeyPath
	I0103 20:12:49.214094   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHKeyPath
	I0103 20:12:49.214289   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHUsername
	I0103 20:12:49.214449   61676 main.go:141] libmachine: Using SSH client type: native
	I0103 20:12:49.214837   61676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.50.197 22 <nil> <nil>}
	I0103 20:12:49.214856   61676 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-451331' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-451331/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-451331' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0103 20:12:49.350098   61676 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0103 20:12:49.350134   61676 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17885-9609/.minikube CaCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17885-9609/.minikube}
	I0103 20:12:49.350158   61676 buildroot.go:174] setting up certificates
	I0103 20:12:49.350172   61676 provision.go:83] configureAuth start
	I0103 20:12:49.350188   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetMachineName
	I0103 20:12:49.350497   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetIP
	I0103 20:12:49.352947   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:49.353356   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:4a:19", ip: ""} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:12:49.353387   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:49.353448   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHHostname
	I0103 20:12:49.355701   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:49.356005   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:4a:19", ip: ""} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:12:49.356033   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:49.356183   61676 provision.go:138] copyHostCerts
	I0103 20:12:49.356241   61676 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem, removing ...
	I0103 20:12:49.356254   61676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem
	I0103 20:12:49.356322   61676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem (1078 bytes)
	I0103 20:12:49.356413   61676 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem, removing ...
	I0103 20:12:49.356421   61676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem
	I0103 20:12:49.356446   61676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem (1123 bytes)
	I0103 20:12:49.356506   61676 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem, removing ...
	I0103 20:12:49.356513   61676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem
	I0103 20:12:49.356535   61676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem (1679 bytes)
	I0103 20:12:49.356587   61676 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem org=jenkins.embed-certs-451331 san=[192.168.50.197 192.168.50.197 localhost 127.0.0.1 minikube embed-certs-451331]
	I0103 20:12:49.413721   61676 provision.go:172] copyRemoteCerts
	I0103 20:12:49.413781   61676 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0103 20:12:49.413804   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHHostname
	I0103 20:12:49.416658   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:49.417143   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:4a:19", ip: ""} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:12:49.417170   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:49.417420   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHPort
	I0103 20:12:49.417617   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHKeyPath
	I0103 20:12:49.417814   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHUsername
	I0103 20:12:49.417977   61676 sshutil.go:53] new ssh client: &{IP:192.168.50.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/embed-certs-451331/id_rsa Username:docker}
	I0103 20:12:49.510884   61676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0103 20:12:49.533465   61676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0103 20:12:49.554895   61676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0103 20:12:49.576069   61676 provision.go:86] duration metric: configureAuth took 225.882364ms
	I0103 20:12:49.576094   61676 buildroot.go:189] setting minikube options for container-runtime
	I0103 20:12:49.576310   61676 config.go:182] Loaded profile config "embed-certs-451331": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 20:12:49.576387   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHHostname
	I0103 20:12:49.579119   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:49.579413   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:4a:19", ip: ""} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:12:49.579461   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:49.579590   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHPort
	I0103 20:12:49.579780   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHKeyPath
	I0103 20:12:49.579968   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHKeyPath
	I0103 20:12:49.580121   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHUsername
	I0103 20:12:49.580271   61676 main.go:141] libmachine: Using SSH client type: native
	I0103 20:12:49.580591   61676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.50.197 22 <nil> <nil>}
	I0103 20:12:49.580615   61676 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0103 20:12:49.883159   61676 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0103 20:12:49.883188   61676 machine.go:91] provisioned docker machine in 820.299871ms
	I0103 20:12:49.883199   61676 start.go:300] post-start starting for "embed-certs-451331" (driver="kvm2")
	I0103 20:12:49.883212   61676 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0103 20:12:49.883239   61676 main.go:141] libmachine: (embed-certs-451331) Calling .DriverName
	I0103 20:12:49.883565   61676 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0103 20:12:49.883599   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHHostname
	I0103 20:12:49.886365   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:49.886658   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:4a:19", ip: ""} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:12:49.886695   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:49.886878   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHPort
	I0103 20:12:49.887091   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHKeyPath
	I0103 20:12:49.887293   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHUsername
	I0103 20:12:49.887468   61676 sshutil.go:53] new ssh client: &{IP:192.168.50.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/embed-certs-451331/id_rsa Username:docker}
	I0103 20:12:49.985529   61676 ssh_runner.go:195] Run: cat /etc/os-release
	I0103 20:12:49.989732   61676 info.go:137] Remote host: Buildroot 2021.02.12
	I0103 20:12:49.989758   61676 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-9609/.minikube/addons for local assets ...
	I0103 20:12:49.989820   61676 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-9609/.minikube/files for local assets ...
	I0103 20:12:49.989891   61676 filesync.go:149] local asset: /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem -> 167952.pem in /etc/ssl/certs
	I0103 20:12:49.989981   61676 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0103 20:12:49.999882   61676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem --> /etc/ssl/certs/167952.pem (1708 bytes)
	I0103 20:12:50.022936   61676 start.go:303] post-start completed in 139.710189ms
	I0103 20:12:50.022966   61676 fix.go:56] fixHost completed within 19.427926379s
	I0103 20:12:50.023002   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHHostname
	I0103 20:12:50.025667   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:50.025940   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:4a:19", ip: ""} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:12:50.025973   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:50.026212   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHPort
	I0103 20:12:50.026424   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHKeyPath
	I0103 20:12:50.026671   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHKeyPath
	I0103 20:12:50.026838   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHUsername
	I0103 20:12:50.027074   61676 main.go:141] libmachine: Using SSH client type: native
	I0103 20:12:50.027381   61676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.50.197 22 <nil> <nil>}
	I0103 20:12:50.027393   61676 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0103 20:12:50.159031   61676 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704312770.110466062
	
	I0103 20:12:50.159053   61676 fix.go:206] guest clock: 1704312770.110466062
	I0103 20:12:50.159061   61676 fix.go:219] Guest: 2024-01-03 20:12:50.110466062 +0000 UTC Remote: 2024-01-03 20:12:50.022969488 +0000 UTC m=+256.568741537 (delta=87.496574ms)
	I0103 20:12:50.159083   61676 fix.go:190] guest clock delta is within tolerance: 87.496574ms
	I0103 20:12:50.159089   61676 start.go:83] releasing machines lock for "embed-certs-451331", held for 19.564082089s
	I0103 20:12:50.159117   61676 main.go:141] libmachine: (embed-certs-451331) Calling .DriverName
	I0103 20:12:50.159421   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetIP
	I0103 20:12:50.162216   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:50.162550   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:4a:19", ip: ""} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:12:50.162577   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:50.162762   61676 main.go:141] libmachine: (embed-certs-451331) Calling .DriverName
	I0103 20:12:50.163248   61676 main.go:141] libmachine: (embed-certs-451331) Calling .DriverName
	I0103 20:12:50.163433   61676 main.go:141] libmachine: (embed-certs-451331) Calling .DriverName
	I0103 20:12:50.163532   61676 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0103 20:12:50.163583   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHHostname
	I0103 20:12:50.163644   61676 ssh_runner.go:195] Run: cat /version.json
	I0103 20:12:50.163671   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHHostname
	I0103 20:12:50.166588   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:50.166753   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:50.166957   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:4a:19", ip: ""} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:12:50.166987   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:50.167192   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHPort
	I0103 20:12:50.167329   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:4a:19", ip: ""} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:12:50.167358   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:50.167362   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHKeyPath
	I0103 20:12:50.167500   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHUsername
	I0103 20:12:50.167590   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHPort
	I0103 20:12:50.167684   61676 sshutil.go:53] new ssh client: &{IP:192.168.50.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/embed-certs-451331/id_rsa Username:docker}
	I0103 20:12:50.167761   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHKeyPath
	I0103 20:12:50.167905   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHUsername
	I0103 20:12:50.168096   61676 sshutil.go:53] new ssh client: &{IP:192.168.50.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/embed-certs-451331/id_rsa Username:docker}
	I0103 20:12:50.298482   61676 ssh_runner.go:195] Run: systemctl --version
	I0103 20:12:50.304252   61676 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0103 20:12:50.442709   61676 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0103 20:12:50.448879   61676 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0103 20:12:50.448959   61676 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0103 20:12:50.467183   61676 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0103 20:12:50.467203   61676 start.go:475] detecting cgroup driver to use...
	I0103 20:12:50.467269   61676 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0103 20:12:50.482438   61676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0103 20:12:50.493931   61676 docker.go:203] disabling cri-docker service (if available) ...
	I0103 20:12:50.493997   61676 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0103 20:12:50.506860   61676 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0103 20:12:50.519279   61676 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0103 20:12:50.627391   61676 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0103 20:12:50.748160   61676 docker.go:219] disabling docker service ...
	I0103 20:12:50.748220   61676 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0103 20:12:50.760970   61676 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0103 20:12:50.772252   61676 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0103 20:12:50.889707   61676 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0103 20:12:51.003794   61676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0103 20:12:51.016226   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0103 20:12:51.032543   61676 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0103 20:12:51.032600   61676 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:12:51.042477   61676 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0103 20:12:51.042559   61676 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:12:51.053103   61676 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:12:51.063469   61676 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:12:51.073912   61676 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0103 20:12:51.083314   61676 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0103 20:12:51.092920   61676 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0103 20:12:51.092969   61676 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0103 20:12:51.106690   61676 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0103 20:12:51.115815   61676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0103 20:12:51.230139   61676 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0103 20:12:51.413184   61676 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0103 20:12:51.413315   61676 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0103 20:12:51.417926   61676 start.go:543] Will wait 60s for crictl version
	I0103 20:12:51.417988   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:12:51.421507   61676 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0103 20:12:51.465370   61676 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0103 20:12:51.465453   61676 ssh_runner.go:195] Run: crio --version
	I0103 20:12:51.519590   61676 ssh_runner.go:195] Run: crio --version
	I0103 20:12:51.582633   61676 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0103 20:12:51.583888   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetIP
	I0103 20:12:51.587068   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:51.587442   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:4a:19", ip: ""} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:12:51.587486   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:51.587724   61676 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0103 20:12:51.591798   61676 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0103 20:12:51.602798   61676 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0103 20:12:51.602871   61676 ssh_runner.go:195] Run: sudo crictl images --output json
	I0103 20:12:51.641736   61676 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0103 20:12:51.641799   61676 ssh_runner.go:195] Run: which lz4
	I0103 20:12:51.645386   61676 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0103 20:12:51.649168   61676 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0103 20:12:51.649196   61676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0103 20:12:53.428537   61676 crio.go:444] Took 1.783185 seconds to copy over tarball
	I0103 20:12:53.428601   61676 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0103 20:12:50.183891   62015 main.go:141] libmachine: (no-preload-749210) Calling .Start
	I0103 20:12:50.184083   62015 main.go:141] libmachine: (no-preload-749210) Ensuring networks are active...
	I0103 20:12:50.184749   62015 main.go:141] libmachine: (no-preload-749210) Ensuring network default is active
	I0103 20:12:50.185084   62015 main.go:141] libmachine: (no-preload-749210) Ensuring network mk-no-preload-749210 is active
	I0103 20:12:50.185435   62015 main.go:141] libmachine: (no-preload-749210) Getting domain xml...
	I0103 20:12:50.186067   62015 main.go:141] libmachine: (no-preload-749210) Creating domain...
	I0103 20:12:51.468267   62015 main.go:141] libmachine: (no-preload-749210) Waiting to get IP...
	I0103 20:12:51.469108   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:12:51.469584   62015 main.go:141] libmachine: (no-preload-749210) DBG | unable to find current IP address of domain no-preload-749210 in network mk-no-preload-749210
	I0103 20:12:51.469664   62015 main.go:141] libmachine: (no-preload-749210) DBG | I0103 20:12:51.469570   62702 retry.go:31] will retry after 254.191618ms: waiting for machine to come up
	I0103 20:12:51.724958   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:12:51.725657   62015 main.go:141] libmachine: (no-preload-749210) DBG | unable to find current IP address of domain no-preload-749210 in network mk-no-preload-749210
	I0103 20:12:51.725683   62015 main.go:141] libmachine: (no-preload-749210) DBG | I0103 20:12:51.725609   62702 retry.go:31] will retry after 279.489548ms: waiting for machine to come up
	I0103 20:12:52.007176   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:12:52.007682   62015 main.go:141] libmachine: (no-preload-749210) DBG | unable to find current IP address of domain no-preload-749210 in network mk-no-preload-749210
	I0103 20:12:52.007713   62015 main.go:141] libmachine: (no-preload-749210) DBG | I0103 20:12:52.007628   62702 retry.go:31] will retry after 422.96552ms: waiting for machine to come up
	I0103 20:12:52.432345   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:12:52.432873   62015 main.go:141] libmachine: (no-preload-749210) DBG | unable to find current IP address of domain no-preload-749210 in network mk-no-preload-749210
	I0103 20:12:52.432912   62015 main.go:141] libmachine: (no-preload-749210) DBG | I0103 20:12:52.432844   62702 retry.go:31] will retry after 561.295375ms: waiting for machine to come up
	I0103 20:12:52.995438   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:12:52.995929   62015 main.go:141] libmachine: (no-preload-749210) DBG | unable to find current IP address of domain no-preload-749210 in network mk-no-preload-749210
	I0103 20:12:52.995963   62015 main.go:141] libmachine: (no-preload-749210) DBG | I0103 20:12:52.995878   62702 retry.go:31] will retry after 547.962782ms: waiting for machine to come up
	I0103 20:12:53.545924   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:12:53.546473   62015 main.go:141] libmachine: (no-preload-749210) DBG | unable to find current IP address of domain no-preload-749210 in network mk-no-preload-749210
	I0103 20:12:53.546558   62015 main.go:141] libmachine: (no-preload-749210) DBG | I0103 20:12:53.546453   62702 retry.go:31] will retry after 927.631327ms: waiting for machine to come up
	I0103 20:12:54.475549   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:12:54.476000   62015 main.go:141] libmachine: (no-preload-749210) DBG | unable to find current IP address of domain no-preload-749210 in network mk-no-preload-749210
	I0103 20:12:54.476046   62015 main.go:141] libmachine: (no-preload-749210) DBG | I0103 20:12:54.475945   62702 retry.go:31] will retry after 880.192703ms: waiting for machine to come up
	I0103 20:12:56.224357   61676 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.795734066s)
	I0103 20:12:56.224386   61676 crio.go:451] Took 2.795820 seconds to extract the tarball
	I0103 20:12:56.224406   61676 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0103 20:12:56.266955   61676 ssh_runner.go:195] Run: sudo crictl images --output json
	I0103 20:12:56.318766   61676 crio.go:496] all images are preloaded for cri-o runtime.
	I0103 20:12:56.318789   61676 cache_images.go:84] Images are preloaded, skipping loading
	I0103 20:12:56.318871   61676 ssh_runner.go:195] Run: crio config
	I0103 20:12:56.378376   61676 cni.go:84] Creating CNI manager for ""
	I0103 20:12:56.378401   61676 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0103 20:12:56.378423   61676 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0103 20:12:56.378451   61676 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.197 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-451331 NodeName:embed-certs-451331 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.197"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.197 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0103 20:12:56.378619   61676 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.197
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-451331"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.197
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.197"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0103 20:12:56.378714   61676 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-451331 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.197
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-451331 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0103 20:12:56.378777   61676 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0103 20:12:56.387967   61676 binaries.go:44] Found k8s binaries, skipping transfer
	I0103 20:12:56.388037   61676 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0103 20:12:56.396000   61676 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0103 20:12:56.411880   61676 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0103 20:12:56.427567   61676 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0103 20:12:56.443342   61676 ssh_runner.go:195] Run: grep 192.168.50.197	control-plane.minikube.internal$ /etc/hosts
	I0103 20:12:56.446991   61676 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.197	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0103 20:12:56.458659   61676 certs.go:56] Setting up /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/embed-certs-451331 for IP: 192.168.50.197
	I0103 20:12:56.458696   61676 certs.go:190] acquiring lock for shared ca certs: {Name:mkcbd6a6a2f3ee7625ecf4a1f72bb7f9689bd33d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:12:56.458844   61676 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17885-9609/.minikube/ca.key
	I0103 20:12:56.458904   61676 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.key
	I0103 20:12:56.459010   61676 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/embed-certs-451331/client.key
	I0103 20:12:56.459092   61676 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/embed-certs-451331/apiserver.key.d719e12a
	I0103 20:12:56.459159   61676 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/embed-certs-451331/proxy-client.key
	I0103 20:12:56.459299   61676 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795.pem (1338 bytes)
	W0103 20:12:56.459341   61676 certs.go:433] ignoring /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795_empty.pem, impossibly tiny 0 bytes
	I0103 20:12:56.459358   61676 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem (1675 bytes)
	I0103 20:12:56.459400   61676 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem (1078 bytes)
	I0103 20:12:56.459434   61676 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem (1123 bytes)
	I0103 20:12:56.459466   61676 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem (1679 bytes)
	I0103 20:12:56.459522   61676 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem (1708 bytes)
	I0103 20:12:56.460408   61676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/embed-certs-451331/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0103 20:12:56.481997   61676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/embed-certs-451331/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0103 20:12:56.504016   61676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/embed-certs-451331/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0103 20:12:56.526477   61676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/embed-certs-451331/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0103 20:12:56.548471   61676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0103 20:12:56.570763   61676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0103 20:12:56.592910   61676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0103 20:12:56.617765   61676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0103 20:12:56.646025   61676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem --> /usr/share/ca-certificates/167952.pem (1708 bytes)
	I0103 20:12:56.668629   61676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0103 20:12:56.690927   61676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795.pem --> /usr/share/ca-certificates/16795.pem (1338 bytes)
	I0103 20:12:56.712067   61676 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0103 20:12:56.727773   61676 ssh_runner.go:195] Run: openssl version
	I0103 20:12:56.733000   61676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0103 20:12:56.742921   61676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:12:56.747499   61676 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  3 18:58 /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:12:56.747562   61676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:12:56.752732   61676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0103 20:12:56.762510   61676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16795.pem && ln -fs /usr/share/ca-certificates/16795.pem /etc/ssl/certs/16795.pem"
	I0103 20:12:56.772401   61676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16795.pem
	I0103 20:12:56.777123   61676 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  3 19:07 /usr/share/ca-certificates/16795.pem
	I0103 20:12:56.777180   61676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16795.pem
	I0103 20:12:56.782490   61676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16795.pem /etc/ssl/certs/51391683.0"
	I0103 20:12:56.793745   61676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167952.pem && ln -fs /usr/share/ca-certificates/167952.pem /etc/ssl/certs/167952.pem"
	I0103 20:12:56.805156   61676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167952.pem
	I0103 20:12:56.809897   61676 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  3 19:07 /usr/share/ca-certificates/167952.pem
	I0103 20:12:56.809954   61676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167952.pem
	I0103 20:12:56.815432   61676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167952.pem /etc/ssl/certs/3ec20f2e.0"
	I0103 20:12:56.826498   61676 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0103 20:12:56.831012   61676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0103 20:12:56.837150   61676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0103 20:12:56.843256   61676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0103 20:12:56.849182   61676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0103 20:12:56.854882   61676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0103 20:12:56.862018   61676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0103 20:12:56.867863   61676 kubeadm.go:404] StartCluster: {Name:embed-certs-451331 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-451331 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.197 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 20:12:56.867982   61676 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0103 20:12:56.868029   61676 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0103 20:12:56.909417   61676 cri.go:89] found id: ""
	I0103 20:12:56.909523   61676 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0103 20:12:56.919487   61676 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0103 20:12:56.919515   61676 kubeadm.go:636] restartCluster start
	I0103 20:12:56.919568   61676 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0103 20:12:56.929137   61676 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:12:56.930326   61676 kubeconfig.go:92] found "embed-certs-451331" server: "https://192.168.50.197:8443"
	I0103 20:12:56.932682   61676 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0103 20:12:56.941846   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:12:56.941909   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:12:56.953616   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:12:57.442188   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:12:57.442281   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:12:57.458303   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:12:57.942905   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:12:57.942988   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:12:57.955860   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:12:58.442326   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:12:58.442420   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:12:58.454294   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:12:55.357897   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:12:55.358462   62015 main.go:141] libmachine: (no-preload-749210) DBG | unable to find current IP address of domain no-preload-749210 in network mk-no-preload-749210
	I0103 20:12:55.358492   62015 main.go:141] libmachine: (no-preload-749210) DBG | I0103 20:12:55.358429   62702 retry.go:31] will retry after 1.158958207s: waiting for machine to come up
	I0103 20:12:56.518837   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:12:56.519260   62015 main.go:141] libmachine: (no-preload-749210) DBG | unable to find current IP address of domain no-preload-749210 in network mk-no-preload-749210
	I0103 20:12:56.519306   62015 main.go:141] libmachine: (no-preload-749210) DBG | I0103 20:12:56.519224   62702 retry.go:31] will retry after 1.620553071s: waiting for machine to come up
	I0103 20:12:58.141980   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:12:58.142505   62015 main.go:141] libmachine: (no-preload-749210) DBG | unable to find current IP address of domain no-preload-749210 in network mk-no-preload-749210
	I0103 20:12:58.142549   62015 main.go:141] libmachine: (no-preload-749210) DBG | I0103 20:12:58.142454   62702 retry.go:31] will retry after 1.525068593s: waiting for machine to come up
	I0103 20:12:59.670380   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:12:59.670880   62015 main.go:141] libmachine: (no-preload-749210) DBG | unable to find current IP address of domain no-preload-749210 in network mk-no-preload-749210
	I0103 20:12:59.670909   62015 main.go:141] libmachine: (no-preload-749210) DBG | I0103 20:12:59.670827   62702 retry.go:31] will retry after 1.772431181s: waiting for machine to come up
	I0103 20:12:58.942887   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:12:58.942975   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:12:58.956781   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:12:59.442313   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:12:59.442402   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:12:59.455837   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:12:59.942355   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:12:59.942439   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:12:59.954326   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:00.441870   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:13:00.441960   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:00.454004   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:00.941882   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:13:00.941995   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:00.958004   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:01.442573   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:13:01.442664   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:01.458604   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:01.942062   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:13:01.942170   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:01.958396   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:02.442928   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:13:02.443027   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:02.456612   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:02.941943   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:13:02.942056   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:02.953939   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:03.442552   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:13:03.442633   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:03.454840   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:01.445221   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:01.445608   62015 main.go:141] libmachine: (no-preload-749210) DBG | unable to find current IP address of domain no-preload-749210 in network mk-no-preload-749210
	I0103 20:13:01.445647   62015 main.go:141] libmachine: (no-preload-749210) DBG | I0103 20:13:01.445565   62702 retry.go:31] will retry after 2.830747633s: waiting for machine to come up
	I0103 20:13:04.279514   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:04.279996   62015 main.go:141] libmachine: (no-preload-749210) DBG | unable to find current IP address of domain no-preload-749210 in network mk-no-preload-749210
	I0103 20:13:04.280020   62015 main.go:141] libmachine: (no-preload-749210) DBG | I0103 20:13:04.279963   62702 retry.go:31] will retry after 4.03880385s: waiting for machine to come up
	I0103 20:13:03.942687   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:13:03.942774   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:03.954714   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:04.442265   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:13:04.442357   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:04.454216   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:04.942877   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:13:04.942952   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:04.954944   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:05.442467   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:13:05.442596   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:05.454305   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:05.942383   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:13:05.942468   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:05.954074   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:06.442723   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:13:06.442811   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:06.454629   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:06.942200   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:13:06.942283   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:06.953799   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:06.953829   61676 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0103 20:13:06.953836   61676 kubeadm.go:1135] stopping kube-system containers ...
	I0103 20:13:06.953845   61676 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0103 20:13:06.953904   61676 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0103 20:13:06.989109   61676 cri.go:89] found id: ""
	I0103 20:13:06.989214   61676 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0103 20:13:07.004822   61676 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0103 20:13:07.014393   61676 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0103 20:13:07.014454   61676 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0103 20:13:07.023669   61676 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0103 20:13:07.023691   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:07.139277   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:07.626388   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:07.814648   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:07.901750   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:07.962623   61676 api_server.go:52] waiting for apiserver process to appear ...
	I0103 20:13:07.962710   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:13:08.463820   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:13:08.322801   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:08.323160   62015 main.go:141] libmachine: (no-preload-749210) Found IP for machine: 192.168.61.245
	I0103 20:13:08.323203   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has current primary IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:08.323222   62015 main.go:141] libmachine: (no-preload-749210) Reserving static IP address...
	I0103 20:13:08.323600   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "no-preload-749210", mac: "52:54:00:fb:87:c7", ip: "192.168.61.245"} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:08.323632   62015 main.go:141] libmachine: (no-preload-749210) Reserved static IP address: 192.168.61.245
	I0103 20:13:08.323664   62015 main.go:141] libmachine: (no-preload-749210) DBG | skip adding static IP to network mk-no-preload-749210 - found existing host DHCP lease matching {name: "no-preload-749210", mac: "52:54:00:fb:87:c7", ip: "192.168.61.245"}
	I0103 20:13:08.323684   62015 main.go:141] libmachine: (no-preload-749210) DBG | Getting to WaitForSSH function...
	I0103 20:13:08.323698   62015 main.go:141] libmachine: (no-preload-749210) Waiting for SSH to be available...
	I0103 20:13:08.325529   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:08.325831   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:87:c7", ip: ""} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:08.325863   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:08.325949   62015 main.go:141] libmachine: (no-preload-749210) DBG | Using SSH client type: external
	I0103 20:13:08.325977   62015 main.go:141] libmachine: (no-preload-749210) DBG | Using SSH private key: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/no-preload-749210/id_rsa (-rw-------)
	I0103 20:13:08.326013   62015 main.go:141] libmachine: (no-preload-749210) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.245 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17885-9609/.minikube/machines/no-preload-749210/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0103 20:13:08.326030   62015 main.go:141] libmachine: (no-preload-749210) DBG | About to run SSH command:
	I0103 20:13:08.326053   62015 main.go:141] libmachine: (no-preload-749210) DBG | exit 0
	I0103 20:13:08.418368   62015 main.go:141] libmachine: (no-preload-749210) DBG | SSH cmd err, output: <nil>: 
	I0103 20:13:08.418718   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetConfigRaw
	I0103 20:13:08.419464   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetIP
	I0103 20:13:08.421838   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:08.422172   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:87:c7", ip: ""} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:08.422199   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:08.422460   62015 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/no-preload-749210/config.json ...
	I0103 20:13:08.422680   62015 machine.go:88] provisioning docker machine ...
	I0103 20:13:08.422702   62015 main.go:141] libmachine: (no-preload-749210) Calling .DriverName
	I0103 20:13:08.422883   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetMachineName
	I0103 20:13:08.423027   62015 buildroot.go:166] provisioning hostname "no-preload-749210"
	I0103 20:13:08.423047   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetMachineName
	I0103 20:13:08.423153   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHHostname
	I0103 20:13:08.425105   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:08.425377   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:87:c7", ip: ""} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:08.425408   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:08.425583   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHPort
	I0103 20:13:08.425734   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHKeyPath
	I0103 20:13:08.425869   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHKeyPath
	I0103 20:13:08.425987   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHUsername
	I0103 20:13:08.426160   62015 main.go:141] libmachine: Using SSH client type: native
	I0103 20:13:08.426488   62015 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.61.245 22 <nil> <nil>}
	I0103 20:13:08.426501   62015 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-749210 && echo "no-preload-749210" | sudo tee /etc/hostname
	I0103 20:13:08.579862   62015 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-749210
	
	I0103 20:13:08.579892   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHHostname
	I0103 20:13:08.583166   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:08.583600   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:87:c7", ip: ""} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:08.583635   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:08.583828   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHPort
	I0103 20:13:08.584039   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHKeyPath
	I0103 20:13:08.584225   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHKeyPath
	I0103 20:13:08.584391   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHUsername
	I0103 20:13:08.584593   62015 main.go:141] libmachine: Using SSH client type: native
	I0103 20:13:08.584928   62015 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.61.245 22 <nil> <nil>}
	I0103 20:13:08.584954   62015 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-749210' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-749210/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-749210' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0103 20:13:08.729661   62015 main.go:141] libmachine: SSH cmd err, output: <nil>: 
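	The shell fragment above is an idempotent /etc/hosts update: replace an existing 127.0.1.1 entry if one is present, otherwise append one. Roughly the same logic as this small, hypothetical Go helper (for illustration only; the hostname is the one from this log):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostsEntry rewrites or appends the 127.0.1.1 line for the given hostname.
	func ensureHostsEntry(path, hostname string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		lines := strings.Split(string(data), "\n")
		found := false
		for i, l := range lines {
			if strings.HasPrefix(l, "127.0.1.1") {
				lines[i] = "127.0.1.1 " + hostname
				found = true
			}
		}
		if !found {
			lines = append(lines, "127.0.1.1 "+hostname)
		}
		return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
	}

	func main() {
		if err := ensureHostsEntry("/etc/hosts", "no-preload-749210"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}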
	I0103 20:13:08.729697   62015 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17885-9609/.minikube CaCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17885-9609/.minikube}
	I0103 20:13:08.729738   62015 buildroot.go:174] setting up certificates
	I0103 20:13:08.729759   62015 provision.go:83] configureAuth start
	I0103 20:13:08.729776   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetMachineName
	I0103 20:13:08.730101   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetIP
	I0103 20:13:08.733282   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:08.733694   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:87:c7", ip: ""} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:08.733728   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:08.733868   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHHostname
	I0103 20:13:08.736223   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:08.736557   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:87:c7", ip: ""} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:08.736589   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:08.736763   62015 provision.go:138] copyHostCerts
	I0103 20:13:08.736830   62015 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem, removing ...
	I0103 20:13:08.736847   62015 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem
	I0103 20:13:08.736913   62015 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem (1078 bytes)
	I0103 20:13:08.737035   62015 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem, removing ...
	I0103 20:13:08.737047   62015 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem
	I0103 20:13:08.737077   62015 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem (1123 bytes)
	I0103 20:13:08.737177   62015 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem, removing ...
	I0103 20:13:08.737188   62015 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem
	I0103 20:13:08.737218   62015 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem (1679 bytes)
	I0103 20:13:08.737295   62015 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem org=jenkins.no-preload-749210 san=[192.168.61.245 192.168.61.245 localhost 127.0.0.1 minikube no-preload-749210]
	I0103 20:13:09.018604   62015 provision.go:172] copyRemoteCerts
	I0103 20:13:09.018662   62015 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0103 20:13:09.018684   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHHostname
	I0103 20:13:09.021339   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:09.021729   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:87:c7", ip: ""} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:09.021777   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:09.021852   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHPort
	I0103 20:13:09.022068   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHKeyPath
	I0103 20:13:09.022220   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHUsername
	I0103 20:13:09.022405   62015 sshutil.go:53] new ssh client: &{IP:192.168.61.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/no-preload-749210/id_rsa Username:docker}
	I0103 20:13:09.120023   62015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0103 20:13:09.143242   62015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0103 20:13:09.166206   62015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0103 20:13:09.192425   62015 provision.go:86] duration metric: configureAuth took 462.649611ms
	I0103 20:13:09.192457   62015 buildroot.go:189] setting minikube options for container-runtime
	I0103 20:13:09.192678   62015 config.go:182] Loaded profile config "no-preload-749210": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0103 20:13:09.192770   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHHostname
	I0103 20:13:09.195193   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:09.195594   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:87:c7", ip: ""} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:09.195633   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:09.195852   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHPort
	I0103 20:13:09.196100   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHKeyPath
	I0103 20:13:09.196272   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHKeyPath
	I0103 20:13:09.196437   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHUsername
	I0103 20:13:09.196637   62015 main.go:141] libmachine: Using SSH client type: native
	I0103 20:13:09.197028   62015 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.61.245 22 <nil> <nil>}
	I0103 20:13:09.197048   62015 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0103 20:13:09.528890   62015 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0103 20:13:09.528915   62015 machine.go:91] provisioned docker machine in 1.106221183s
	I0103 20:13:09.528924   62015 start.go:300] post-start starting for "no-preload-749210" (driver="kvm2")
	I0103 20:13:09.528949   62015 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0103 20:13:09.528966   62015 main.go:141] libmachine: (no-preload-749210) Calling .DriverName
	I0103 20:13:09.529337   62015 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0103 20:13:09.529372   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHHostname
	I0103 20:13:09.532679   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:09.533032   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:87:c7", ip: ""} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:09.533063   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:09.533262   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHPort
	I0103 20:13:09.533490   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHKeyPath
	I0103 20:13:09.533675   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHUsername
	I0103 20:13:09.533841   62015 sshutil.go:53] new ssh client: &{IP:192.168.61.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/no-preload-749210/id_rsa Username:docker}
	I0103 20:13:09.632949   62015 ssh_runner.go:195] Run: cat /etc/os-release
	I0103 20:13:09.638382   62015 info.go:137] Remote host: Buildroot 2021.02.12
	I0103 20:13:09.638421   62015 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-9609/.minikube/addons for local assets ...
	I0103 20:13:09.638502   62015 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-9609/.minikube/files for local assets ...
	I0103 20:13:09.638617   62015 filesync.go:149] local asset: /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem -> 167952.pem in /etc/ssl/certs
	I0103 20:13:09.638744   62015 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0103 20:13:09.650407   62015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem --> /etc/ssl/certs/167952.pem (1708 bytes)
	I0103 20:13:09.672528   62015 start.go:303] post-start completed in 143.577643ms
	I0103 20:13:09.672560   62015 fix.go:56] fixHost completed within 19.513324819s
	I0103 20:13:09.672585   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHHostname
	I0103 20:13:09.675037   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:09.675398   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:87:c7", ip: ""} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:09.675430   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:09.675587   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHPort
	I0103 20:13:09.675811   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHKeyPath
	I0103 20:13:09.675963   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHKeyPath
	I0103 20:13:09.676112   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHUsername
	I0103 20:13:09.676294   62015 main.go:141] libmachine: Using SSH client type: native
	I0103 20:13:09.676674   62015 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.61.245 22 <nil> <nil>}
	I0103 20:13:09.676690   62015 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0103 20:13:09.811720   62050 start.go:369] acquired machines lock for "default-k8s-diff-port-018788" in 4m4.205717121s
	I0103 20:13:09.811786   62050 start.go:96] Skipping create...Using existing machine configuration
	I0103 20:13:09.811797   62050 fix.go:54] fixHost starting: 
	I0103 20:13:09.812213   62050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:13:09.812257   62050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:13:09.831972   62050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36915
	I0103 20:13:09.832420   62050 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:13:09.832973   62050 main.go:141] libmachine: Using API Version  1
	I0103 20:13:09.833004   62050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:13:09.833345   62050 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:13:09.833505   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .DriverName
	I0103 20:13:09.833637   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetState
	I0103 20:13:09.835476   62050 fix.go:102] recreateIfNeeded on default-k8s-diff-port-018788: state=Stopped err=<nil>
	I0103 20:13:09.835520   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .DriverName
	W0103 20:13:09.835689   62050 fix.go:128] unexpected machine state, will restart: <nil>
	I0103 20:13:09.837499   62050 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-018788" ...
	I0103 20:13:09.838938   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .Start
	I0103 20:13:09.839117   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Ensuring networks are active...
	I0103 20:13:09.839888   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Ensuring network default is active
	I0103 20:13:09.840347   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Ensuring network mk-default-k8s-diff-port-018788 is active
	I0103 20:13:09.840765   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Getting domain xml...
	I0103 20:13:09.841599   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Creating domain...
	I0103 20:13:09.811571   62015 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704312789.764323206
	
	I0103 20:13:09.811601   62015 fix.go:206] guest clock: 1704312789.764323206
	I0103 20:13:09.811611   62015 fix.go:219] Guest: 2024-01-03 20:13:09.764323206 +0000 UTC Remote: 2024-01-03 20:13:09.672564299 +0000 UTC m=+244.986151230 (delta=91.758907ms)
	I0103 20:13:09.811636   62015 fix.go:190] guest clock delta is within tolerance: 91.758907ms
	I0103 20:13:09.811642   62015 start.go:83] releasing machines lock for "no-preload-749210", held for 19.652439302s
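	The guest-clock check above compares the VM's reported time with the host-side timestamp and accepts the skew if it is small. The delta reported by fix.go can be reproduced directly from the two timestamps in the log:

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// Values copied from the fix.go lines above.
		guest := time.Unix(1704312789, 764323206)
		remote := time.Unix(1704312789, 672564299)
		fmt.Println("guest clock delta:", guest.Sub(remote)) // about 91.758907ms, within tolerance
	}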
	I0103 20:13:09.811678   62015 main.go:141] libmachine: (no-preload-749210) Calling .DriverName
	I0103 20:13:09.811949   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetIP
	I0103 20:13:09.815012   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:09.815391   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:87:c7", ip: ""} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:09.815429   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:09.815641   62015 main.go:141] libmachine: (no-preload-749210) Calling .DriverName
	I0103 20:13:09.816177   62015 main.go:141] libmachine: (no-preload-749210) Calling .DriverName
	I0103 20:13:09.816363   62015 main.go:141] libmachine: (no-preload-749210) Calling .DriverName
	I0103 20:13:09.816471   62015 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0103 20:13:09.816509   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHHostname
	I0103 20:13:09.816620   62015 ssh_runner.go:195] Run: cat /version.json
	I0103 20:13:09.816646   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHHostname
	I0103 20:13:09.819652   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:09.819909   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:09.820058   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:87:c7", ip: ""} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:09.820088   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:09.820319   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:87:c7", ip: ""} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:09.820345   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:09.820377   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHPort
	I0103 20:13:09.820581   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHKeyPath
	I0103 20:13:09.820646   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHPort
	I0103 20:13:09.820753   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHUsername
	I0103 20:13:09.820822   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHKeyPath
	I0103 20:13:09.820910   62015 sshutil.go:53] new ssh client: &{IP:192.168.61.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/no-preload-749210/id_rsa Username:docker}
	I0103 20:13:09.821007   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHUsername
	I0103 20:13:09.821131   62015 sshutil.go:53] new ssh client: &{IP:192.168.61.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/no-preload-749210/id_rsa Username:docker}
	I0103 20:13:09.949119   62015 ssh_runner.go:195] Run: systemctl --version
	I0103 20:13:09.956247   62015 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0103 20:13:10.116715   62015 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0103 20:13:10.122512   62015 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0103 20:13:10.122640   62015 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0103 20:13:10.142239   62015 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0103 20:13:10.142265   62015 start.go:475] detecting cgroup driver to use...
	I0103 20:13:10.142336   62015 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0103 20:13:10.159473   62015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0103 20:13:10.175492   62015 docker.go:203] disabling cri-docker service (if available) ...
	I0103 20:13:10.175555   62015 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0103 20:13:10.191974   62015 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0103 20:13:10.208639   62015 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0103 20:13:10.343228   62015 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0103 20:13:10.457642   62015 docker.go:219] disabling docker service ...
	I0103 20:13:10.457720   62015 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0103 20:13:10.475117   62015 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0103 20:13:10.491265   62015 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0103 20:13:10.613064   62015 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0103 20:13:10.741969   62015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0103 20:13:10.755923   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0103 20:13:10.775483   62015 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0103 20:13:10.775550   62015 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:13:10.785489   62015 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0103 20:13:10.785557   62015 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:13:10.795303   62015 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:13:10.804763   62015 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:13:10.814559   62015 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0103 20:13:10.824431   62015 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0103 20:13:10.833193   62015 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0103 20:13:10.833273   62015 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0103 20:13:10.850446   62015 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0103 20:13:10.861775   62015 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0103 20:13:11.021577   62015 ssh_runner.go:195] Run: sudo systemctl restart crio
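	The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf to pin the pause image, switch the cgroup manager to cgroupfs, and re-add conmon_cgroup = "pod" before CRI-O is restarted. A rough plain-Go equivalent of those edits (illustrative only, not minikube's crio.go; the file path and key names come from the log):

	package main

	import (
		"os"
		"regexp"
	)

	func main() {
		path := "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		// pin the pause image
		data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
		// drop any existing conmon_cgroup line, mirroring the '/conmon_cgroup = .*/d' sed
		data = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*\n`).ReplaceAll(data, nil)
		// set the cgroup manager and append conmon_cgroup right after it
		data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAll(data, []byte("cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\""))
		if err := os.WriteFile(path, data, 0644); err != nil {
			panic(err)
		}
	}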
	I0103 20:13:11.217675   62015 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0103 20:13:11.217748   62015 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0103 20:13:11.222475   62015 start.go:543] Will wait 60s for crictl version
	I0103 20:13:11.222552   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:13:11.226128   62015 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0103 20:13:11.266681   62015 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0103 20:13:11.266775   62015 ssh_runner.go:195] Run: crio --version
	I0103 20:13:11.313142   62015 ssh_runner.go:195] Run: crio --version
	I0103 20:13:11.358396   62015 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I0103 20:13:08.963472   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:13:09.462836   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:13:09.963771   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:13:09.991718   61676 api_server.go:72] duration metric: took 2.029094062s to wait for apiserver process to appear ...
	I0103 20:13:09.991748   61676 api_server.go:88] waiting for apiserver healthz status ...
	I0103 20:13:09.991769   61676 api_server.go:253] Checking apiserver healthz at https://192.168.50.197:8443/healthz ...
	I0103 20:13:09.992264   61676 api_server.go:269] stopped: https://192.168.50.197:8443/healthz: Get "https://192.168.50.197:8443/healthz": dial tcp 192.168.50.197:8443: connect: connection refused
	I0103 20:13:10.491803   61676 api_server.go:253] Checking apiserver healthz at https://192.168.50.197:8443/healthz ...
	I0103 20:13:11.359808   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetIP
	I0103 20:13:11.363074   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:11.363434   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:87:c7", ip: ""} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:11.363465   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:11.363695   62015 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0103 20:13:11.367689   62015 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0103 20:13:11.378693   62015 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0103 20:13:11.378746   62015 ssh_runner.go:195] Run: sudo crictl images --output json
	I0103 20:13:11.416544   62015 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0103 20:13:11.416570   62015 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0103 20:13:11.416642   62015 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 20:13:11.416698   62015 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0103 20:13:11.416724   62015 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0103 20:13:11.416699   62015 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0103 20:13:11.416929   62015 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0103 20:13:11.416671   62015 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0103 20:13:11.417054   62015 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0103 20:13:11.417093   62015 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0103 20:13:11.418600   62015 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0103 20:13:11.418621   62015 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0103 20:13:11.418630   62015 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0103 20:13:11.418646   62015 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0103 20:13:11.418661   62015 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 20:13:11.418675   62015 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0103 20:13:11.418685   62015 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0103 20:13:11.418697   62015 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0103 20:13:11.635223   62015 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0103 20:13:11.662007   62015 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0103 20:13:11.668522   62015 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0103 20:13:11.671471   62015 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0103 20:13:11.672069   62015 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0103 20:13:11.685216   62015 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0103 20:13:11.687462   62015 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0103 20:13:11.716775   62015 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0103 20:13:11.716825   62015 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0103 20:13:11.716882   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:13:11.762358   62015 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0103 20:13:11.762394   62015 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0103 20:13:11.762463   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:13:11.846225   62015 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0103 20:13:11.846268   62015 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0103 20:13:11.846317   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:13:11.846432   62015 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0103 20:13:11.846473   62015 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0103 20:13:11.846529   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:13:11.846515   62015 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0103 20:13:11.846655   62015 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0103 20:13:11.846711   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:13:11.956577   62015 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0103 20:13:11.956659   62015 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0103 20:13:11.956689   62015 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0103 20:13:11.956746   62015 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0103 20:13:11.956760   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:13:11.956782   62015 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0103 20:13:11.956820   62015 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0103 20:13:11.956873   62015 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0103 20:13:12.064715   62015 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0103 20:13:12.064764   62015 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0103 20:13:12.064720   62015 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0103 20:13:12.064856   62015 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0103 20:13:12.064903   62015 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0103 20:13:12.068647   62015 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0103 20:13:12.068685   62015 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0103 20:13:12.068752   62015 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0103 20:13:12.068767   62015 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0103 20:13:12.068771   62015 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0103 20:13:12.068841   62015 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0103 20:13:12.077600   62015 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0103 20:13:12.077622   62015 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0103 20:13:12.077682   62015 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0103 20:13:12.077798   62015 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0103 20:13:12.109729   62015 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0103 20:13:12.109778   62015 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0103 20:13:12.109838   62015 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0103 20:13:12.109927   62015 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0103 20:13:12.110020   62015 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I0103 20:13:12.237011   62015 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 20:13:14.279507   62015 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.201800359s)
	I0103 20:13:14.279592   62015 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0103 20:13:14.279606   62015 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0: (2.169553787s)
	I0103 20:13:14.279641   62015 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0103 20:13:14.279646   62015 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0103 20:13:14.279645   62015 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.042604307s)
	I0103 20:13:14.279725   62015 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0103 20:13:14.279726   62015 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0103 20:13:14.279760   62015 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 20:13:14.279802   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:13:14.285860   62015 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
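	The cache_images flow above boils down to: ask the runtime (via podman image inspect) whether each required image already exists, remove stale references with crictl rmi, and load the missing ones from tarballs under /var/lib/minikube/images. A hedged sketch of that per-image decision (not the real cache_images.go; the command names and paths mirror the log):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// ensureImage loads cachedTar into the runtime if image is not already present.
	func ensureImage(image, cachedTar string) error {
		out, _ := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Output()
		if strings.TrimSpace(string(out)) != "" {
			return nil // image already in the container runtime
		}
		_ = exec.Command("sudo", "/usr/bin/crictl", "rmi", image).Run() // drop any stale tag
		if err := exec.Command("sudo", "podman", "load", "-i", cachedTar).Run(); err != nil {
			return fmt.Errorf("loading %s from %s: %w", image, cachedTar, err)
		}
		return nil
	}

	func main() {
		err := ensureImage("registry.k8s.io/kube-proxy:v1.29.0-rc.2",
			"/var/lib/minikube/images/kube-proxy_v1.29.0-rc.2")
		fmt.Println(err)
	}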
	I0103 20:13:11.246503   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting to get IP...
	I0103 20:13:11.247669   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:11.248203   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | unable to find current IP address of domain default-k8s-diff-port-018788 in network mk-default-k8s-diff-port-018788
	I0103 20:13:11.248301   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | I0103 20:13:11.248165   62835 retry.go:31] will retry after 292.358185ms: waiting for machine to come up
	I0103 20:13:11.541836   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:11.542224   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | unable to find current IP address of domain default-k8s-diff-port-018788 in network mk-default-k8s-diff-port-018788
	I0103 20:13:11.542257   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | I0103 20:13:11.542168   62835 retry.go:31] will retry after 370.634511ms: waiting for machine to come up
	I0103 20:13:11.914890   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:11.915372   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | unable to find current IP address of domain default-k8s-diff-port-018788 in network mk-default-k8s-diff-port-018788
	I0103 20:13:11.915403   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | I0103 20:13:11.915330   62835 retry.go:31] will retry after 304.80922ms: waiting for machine to come up
	I0103 20:13:12.221826   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:12.222257   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | unable to find current IP address of domain default-k8s-diff-port-018788 in network mk-default-k8s-diff-port-018788
	I0103 20:13:12.222289   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | I0103 20:13:12.222232   62835 retry.go:31] will retry after 534.177843ms: waiting for machine to come up
	I0103 20:13:12.757904   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:12.758389   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | unable to find current IP address of domain default-k8s-diff-port-018788 in network mk-default-k8s-diff-port-018788
	I0103 20:13:12.758422   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | I0103 20:13:12.758334   62835 retry.go:31] will retry after 749.166369ms: waiting for machine to come up
	I0103 20:13:13.509343   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:13.509938   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | unable to find current IP address of domain default-k8s-diff-port-018788 in network mk-default-k8s-diff-port-018788
	I0103 20:13:13.509984   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | I0103 20:13:13.509854   62835 retry.go:31] will retry after 716.215015ms: waiting for machine to come up
	I0103 20:13:14.227886   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:14.228388   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | unable to find current IP address of domain default-k8s-diff-port-018788 in network mk-default-k8s-diff-port-018788
	I0103 20:13:14.228414   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | I0103 20:13:14.228338   62835 retry.go:31] will retry after 1.095458606s: waiting for machine to come up
	I0103 20:13:15.324880   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:15.325299   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | unable to find current IP address of domain default-k8s-diff-port-018788 in network mk-default-k8s-diff-port-018788
	I0103 20:13:15.325332   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | I0103 20:13:15.325250   62835 retry.go:31] will retry after 1.266878415s: waiting for machine to come up
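	The retry.go lines above are the usual wait-for-IP loop: each failed DHCP-lease lookup schedules another attempt after a short, slightly growing delay. A self-contained backoff loop in the same spirit (the jittered delays printed in the log come from minikube's own retry helper; this standalone version is only illustrative):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitFor retries lookup with a jittered, growing delay until it succeeds or attempts run out.
	func waitFor(lookup func() (string, error), attempts int) (string, error) {
		for i := 0; i < attempts; i++ {
			if ip, err := lookup(); err == nil {
				return ip, nil
			}
			delay := time.Duration(200+rand.Intn(400)) * time.Millisecond * time.Duration(i+1)
			time.Sleep(delay)
		}
		return "", errors.New("machine did not report an IP address")
	}

	func main() {
		ip, err := waitFor(func() (string, error) { return "", errors.New("no DHCP lease yet") }, 3)
		fmt.Println(ip, err)
	}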
	I0103 20:13:14.427035   61676 api_server.go:279] https://192.168.50.197:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0103 20:13:14.427077   61676 api_server.go:103] status: https://192.168.50.197:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0103 20:13:14.427119   61676 api_server.go:253] Checking apiserver healthz at https://192.168.50.197:8443/healthz ...
	I0103 20:13:14.462068   61676 api_server.go:279] https://192.168.50.197:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0103 20:13:14.462115   61676 api_server.go:103] status: https://192.168.50.197:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0103 20:13:14.492283   61676 api_server.go:253] Checking apiserver healthz at https://192.168.50.197:8443/healthz ...
	I0103 20:13:14.500354   61676 api_server.go:279] https://192.168.50.197:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0103 20:13:14.500391   61676 api_server.go:103] status: https://192.168.50.197:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0103 20:13:14.991910   61676 api_server.go:253] Checking apiserver healthz at https://192.168.50.197:8443/healthz ...
	I0103 20:13:14.997522   61676 api_server.go:279] https://192.168.50.197:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0103 20:13:14.997550   61676 api_server.go:103] status: https://192.168.50.197:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0103 20:13:15.492157   61676 api_server.go:253] Checking apiserver healthz at https://192.168.50.197:8443/healthz ...
	I0103 20:13:15.500340   61676 api_server.go:279] https://192.168.50.197:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0103 20:13:15.500377   61676 api_server.go:103] status: https://192.168.50.197:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0103 20:13:15.992158   61676 api_server.go:253] Checking apiserver healthz at https://192.168.50.197:8443/healthz ...
	I0103 20:13:16.002940   61676 api_server.go:279] https://192.168.50.197:8443/healthz returned 200:
	ok
	I0103 20:13:16.020171   61676 api_server.go:141] control plane version: v1.28.4
	I0103 20:13:16.020205   61676 api_server.go:131] duration metric: took 6.028448633s to wait for apiserver health ...
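The probe loop recorded above amounts to repeatedly GETting the apiserver's /healthz endpoint and treating anything other than HTTP 200 as not yet healthy. A minimal standalone sketch of that check in Go follows; it assumes the endpoint shown in the log, that anonymous access to /healthz is permitted (as it is once RBAC bootstrap finishes), and it skips TLS verification purely for illustration. It is not minikube's api_server.go, just an approximation of the same polling idea.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Endpoint taken from the log above; TLS verification is skipped purely
	// for illustration when probing a throwaway test cluster.
	const healthz = "https://192.168.50.197:8443/healthz?verbose"

	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}

	for {
		resp, err := client.Get(healthz)
		if err != nil {
			fmt.Println("request failed:", err)
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("status %d\n%s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return // healthy: the wait loop above stops at this point too
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}

The ?verbose query should make the apiserver include the per-check [+]/[-] lines seen above even on a successful response; on failure they are returned regardless, which is what the 500 responses in the log show.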
	I0103 20:13:16.020216   61676 cni.go:84] Creating CNI manager for ""
	I0103 20:13:16.020226   61676 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0103 20:13:16.022596   61676 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0103 20:13:16.024514   61676 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0103 20:13:16.064582   61676 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0103 20:13:16.113727   61676 system_pods.go:43] waiting for kube-system pods to appear ...
	I0103 20:13:16.124984   61676 system_pods.go:59] 8 kube-system pods found
	I0103 20:13:16.125031   61676 system_pods.go:61] "coredns-5dd5756b68-sx6gg" [6a4ea161-1a32-4c3b-9a0d-b4c596492d8b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0103 20:13:16.125044   61676 system_pods.go:61] "etcd-embed-certs-451331" [01d6441d-5e39-405a-81df-c2ed1e28cf0b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0103 20:13:16.125061   61676 system_pods.go:61] "kube-apiserver-embed-certs-451331" [ed38f120-6a1a-48e7-9346-f792f2e13cfc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0103 20:13:16.125072   61676 system_pods.go:61] "kube-controller-manager-embed-certs-451331" [4ca17ea6-a7e6-425b-98ba-7f917ceb91a0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0103 20:13:16.125086   61676 system_pods.go:61] "kube-proxy-fsnb9" [d1f00cf1-e9c4-442b-a6b3-b633252b840c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0103 20:13:16.125097   61676 system_pods.go:61] "kube-scheduler-embed-certs-451331" [00ec8091-7ed7-40b0-8b63-1c548fa8632d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0103 20:13:16.125111   61676 system_pods.go:61] "metrics-server-57f55c9bc5-sm8rb" [12b9f83d-abf8-431c-a271-b8489d32f0de] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0103 20:13:16.125125   61676 system_pods.go:61] "storage-provisioner" [cbce49e7-cef5-40a1-a017-906fcc77ef66] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0103 20:13:16.125140   61676 system_pods.go:74] duration metric: took 11.390906ms to wait for pod list to return data ...
	I0103 20:13:16.125152   61676 node_conditions.go:102] verifying NodePressure condition ...
	I0103 20:13:16.133036   61676 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0103 20:13:16.133072   61676 node_conditions.go:123] node cpu capacity is 2
	I0103 20:13:16.133086   61676 node_conditions.go:105] duration metric: took 7.928329ms to run NodePressure ...
	I0103 20:13:16.133109   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:16.519151   61676 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0103 20:13:16.530359   61676 kubeadm.go:787] kubelet initialised
	I0103 20:13:16.530380   61676 kubeadm.go:788] duration metric: took 11.203465ms waiting for restarted kubelet to initialise ...
	I0103 20:13:16.530388   61676 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:13:16.540797   61676 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-sx6gg" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:16.550417   61676 pod_ready.go:97] node "embed-certs-451331" hosting pod "coredns-5dd5756b68-sx6gg" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-451331" has status "Ready":"False"
	I0103 20:13:16.550457   61676 pod_ready.go:81] duration metric: took 9.627239ms waiting for pod "coredns-5dd5756b68-sx6gg" in "kube-system" namespace to be "Ready" ...
	E0103 20:13:16.550475   61676 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-451331" hosting pod "coredns-5dd5756b68-sx6gg" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-451331" has status "Ready":"False"
	I0103 20:13:16.550486   61676 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-451331" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:16.557664   61676 pod_ready.go:97] node "embed-certs-451331" hosting pod "etcd-embed-certs-451331" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-451331" has status "Ready":"False"
	I0103 20:13:16.557693   61676 pod_ready.go:81] duration metric: took 7.191907ms waiting for pod "etcd-embed-certs-451331" in "kube-system" namespace to be "Ready" ...
	E0103 20:13:16.557705   61676 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-451331" hosting pod "etcd-embed-certs-451331" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-451331" has status "Ready":"False"
	I0103 20:13:16.557721   61676 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-451331" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:16.566973   61676 pod_ready.go:97] node "embed-certs-451331" hosting pod "kube-apiserver-embed-certs-451331" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-451331" has status "Ready":"False"
	I0103 20:13:16.567007   61676 pod_ready.go:81] duration metric: took 9.268451ms waiting for pod "kube-apiserver-embed-certs-451331" in "kube-system" namespace to be "Ready" ...
	E0103 20:13:16.567019   61676 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-451331" hosting pod "kube-apiserver-embed-certs-451331" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-451331" has status "Ready":"False"
	I0103 20:13:16.567027   61676 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-451331" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:16.587777   61676 pod_ready.go:97] node "embed-certs-451331" hosting pod "kube-controller-manager-embed-certs-451331" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-451331" has status "Ready":"False"
	I0103 20:13:16.587811   61676 pod_ready.go:81] duration metric: took 20.769874ms waiting for pod "kube-controller-manager-embed-certs-451331" in "kube-system" namespace to be "Ready" ...
	E0103 20:13:16.587825   61676 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-451331" hosting pod "kube-controller-manager-embed-certs-451331" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-451331" has status "Ready":"False"
	I0103 20:13:16.587832   61676 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-fsnb9" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:16.923613   61676 pod_ready.go:97] node "embed-certs-451331" hosting pod "kube-proxy-fsnb9" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-451331" has status "Ready":"False"
	I0103 20:13:16.923643   61676 pod_ready.go:81] duration metric: took 335.80096ms waiting for pod "kube-proxy-fsnb9" in "kube-system" namespace to be "Ready" ...
	E0103 20:13:16.923655   61676 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-451331" hosting pod "kube-proxy-fsnb9" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-451331" has status "Ready":"False"
	I0103 20:13:16.923663   61676 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-451331" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:17.323875   61676 pod_ready.go:97] node "embed-certs-451331" hosting pod "kube-scheduler-embed-certs-451331" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-451331" has status "Ready":"False"
	I0103 20:13:17.323911   61676 pod_ready.go:81] duration metric: took 400.238515ms waiting for pod "kube-scheduler-embed-certs-451331" in "kube-system" namespace to be "Ready" ...
	E0103 20:13:17.323922   61676 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-451331" hosting pod "kube-scheduler-embed-certs-451331" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-451331" has status "Ready":"False"
	I0103 20:13:17.323931   61676 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:17.724694   61676 pod_ready.go:97] node "embed-certs-451331" hosting pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-451331" has status "Ready":"False"
	I0103 20:13:17.724727   61676 pod_ready.go:81] duration metric: took 400.785148ms waiting for pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace to be "Ready" ...
	E0103 20:13:17.724741   61676 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-451331" hosting pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-451331" has status "Ready":"False"
	I0103 20:13:17.724750   61676 pod_ready.go:38] duration metric: took 1.194352759s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
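The pod_ready waits logged above reduce to reading each system pod's Ready condition from the API and retrying while the node or pod is not yet Ready. A rough client-go sketch of that single check is below; the kubeconfig path and pod name are placeholders lifted from the log, and this is only an approximation of the wait in pod_ready.go, not a drop-in replacement.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the pod carries condition Ready=True.
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Hypothetical kubeconfig path; the test run writes its own under the
	// jenkins workspace shown elsewhere in this log.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll one system-critical pod until it reports Ready.
	for {
		pod, err := client.CoreV1().Pods("kube-system").
			Get(context.TODO(), "coredns-5dd5756b68-sx6gg", metav1.GetOptions{})
		if err == nil && podIsReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}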
	I0103 20:13:17.724774   61676 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0103 20:13:17.754724   61676 ops.go:34] apiserver oom_adj: -16
	I0103 20:13:17.754762   61676 kubeadm.go:640] restartCluster took 20.835238159s
	I0103 20:13:17.754774   61676 kubeadm.go:406] StartCluster complete in 20.886921594s
	I0103 20:13:17.754794   61676 settings.go:142] acquiring lock: {Name:mkd213c48538fa01cb82b417485055a8adbf5e4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:13:17.754875   61676 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17885-9609/kubeconfig
	I0103 20:13:17.757638   61676 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-9609/kubeconfig: {Name:mkbd4e6a8b39f5a4a43fb71671a7bbd8b1617cf1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:13:17.759852   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0103 20:13:17.759948   61676 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0103 20:13:17.760022   61676 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-451331"
	I0103 20:13:17.760049   61676 addons.go:237] Setting addon storage-provisioner=true in "embed-certs-451331"
	W0103 20:13:17.760060   61676 addons.go:246] addon storage-provisioner should already be in state true
	I0103 20:13:17.760105   61676 host.go:66] Checking if "embed-certs-451331" exists ...
	I0103 20:13:17.760154   61676 config.go:182] Loaded profile config "embed-certs-451331": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 20:13:17.760202   61676 addons.go:69] Setting default-storageclass=true in profile "embed-certs-451331"
	I0103 20:13:17.760227   61676 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-451331"
	I0103 20:13:17.760525   61676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:13:17.760553   61676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:13:17.760595   61676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:13:17.760619   61676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:13:17.760814   61676 addons.go:69] Setting metrics-server=true in profile "embed-certs-451331"
	I0103 20:13:17.760869   61676 addons.go:237] Setting addon metrics-server=true in "embed-certs-451331"
	W0103 20:13:17.760887   61676 addons.go:246] addon metrics-server should already be in state true
	I0103 20:13:17.760949   61676 host.go:66] Checking if "embed-certs-451331" exists ...
	I0103 20:13:17.761311   61676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:13:17.761367   61676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:13:17.778350   61676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36365
	I0103 20:13:17.778603   61676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40503
	I0103 20:13:17.778840   61676 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:13:17.778947   61676 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:13:17.779349   61676 main.go:141] libmachine: Using API Version  1
	I0103 20:13:17.779369   61676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:13:17.779496   61676 main.go:141] libmachine: Using API Version  1
	I0103 20:13:17.779506   61676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:13:17.779894   61676 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:13:17.779936   61676 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:13:17.780390   61676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46541
	I0103 20:13:17.780507   61676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:13:17.780528   61676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:13:17.780892   61676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:13:17.780933   61676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:13:17.781532   61676 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:13:17.782012   61676 main.go:141] libmachine: Using API Version  1
	I0103 20:13:17.782030   61676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:13:17.782393   61676 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:13:17.782580   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetState
	I0103 20:13:17.786209   61676 addons.go:237] Setting addon default-storageclass=true in "embed-certs-451331"
	W0103 20:13:17.786231   61676 addons.go:246] addon default-storageclass should already be in state true
	I0103 20:13:17.786264   61676 host.go:66] Checking if "embed-certs-451331" exists ...
	I0103 20:13:17.786730   61676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:13:17.786761   61676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:13:17.796538   61676 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-451331" context rescaled to 1 replicas
	I0103 20:13:17.796579   61676 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.197 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0103 20:13:17.798616   61676 out.go:177] * Verifying Kubernetes components...
	I0103 20:13:17.800702   61676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 20:13:17.799744   61676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37933
	I0103 20:13:17.801004   61676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37585
	I0103 20:13:17.801125   61676 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:13:17.801622   61676 main.go:141] libmachine: Using API Version  1
	I0103 20:13:17.801643   61676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:13:17.801967   61676 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:13:17.802456   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetState
	I0103 20:13:17.804195   61676 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:13:17.804537   61676 main.go:141] libmachine: (embed-certs-451331) Calling .DriverName
	I0103 20:13:17.804683   61676 main.go:141] libmachine: Using API Version  1
	I0103 20:13:17.804700   61676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:13:17.806577   61676 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 20:13:17.805108   61676 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:13:17.807681   61676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42317
	I0103 20:13:17.808340   61676 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0103 20:13:17.808354   61676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0103 20:13:17.808371   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHHostname
	I0103 20:13:17.808513   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetState
	I0103 20:13:17.809005   61676 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:13:17.809510   61676 main.go:141] libmachine: Using API Version  1
	I0103 20:13:17.809529   61676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:13:17.809978   61676 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:13:17.810778   61676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:13:17.810822   61676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:13:17.812250   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:13:17.812607   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:4a:19", ip: ""} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:13:17.812629   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:13:17.812892   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHPort
	I0103 20:13:17.812970   61676 main.go:141] libmachine: (embed-certs-451331) Calling .DriverName
	I0103 20:13:17.813069   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHKeyPath
	I0103 20:13:17.815321   61676 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0103 20:13:17.813342   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHUsername
	I0103 20:13:17.817289   61676 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0103 20:13:17.817308   61676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0103 20:13:17.817336   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHHostname
	I0103 20:13:17.817473   61676 sshutil.go:53] new ssh client: &{IP:192.168.50.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/embed-certs-451331/id_rsa Username:docker}
	I0103 20:13:17.820418   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:13:17.820892   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:4a:19", ip: ""} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:13:17.820920   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:13:17.821168   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHPort
	I0103 20:13:17.821350   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHKeyPath
	I0103 20:13:17.821468   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHUsername
	I0103 20:13:17.821597   61676 sshutil.go:53] new ssh client: &{IP:192.168.50.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/embed-certs-451331/id_rsa Username:docker}
	I0103 20:13:17.829857   61676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34553
	I0103 20:13:17.830343   61676 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:13:17.830847   61676 main.go:141] libmachine: Using API Version  1
	I0103 20:13:17.830869   61676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:13:17.831278   61676 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:13:17.831432   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetState
	I0103 20:13:17.833351   61676 main.go:141] libmachine: (embed-certs-451331) Calling .DriverName
	I0103 20:13:17.833678   61676 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0103 20:13:17.833695   61676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0103 20:13:17.833714   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHHostname
	I0103 20:13:17.837454   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:13:17.837708   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:4a:19", ip: ""} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:13:17.837730   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:13:17.837975   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHPort
	I0103 20:13:17.838211   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHKeyPath
	I0103 20:13:17.838384   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHUsername
	I0103 20:13:17.838534   61676 sshutil.go:53] new ssh client: &{IP:192.168.50.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/embed-certs-451331/id_rsa Username:docker}
	I0103 20:13:18.036885   61676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0103 20:13:18.097340   61676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0103 20:13:18.099953   61676 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0103 20:13:18.099982   61676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0103 20:13:18.242823   61676 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0103 20:13:18.242847   61676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0103 20:13:18.309930   61676 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0103 20:13:18.309959   61676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0103 20:13:18.321992   61676 node_ready.go:35] waiting up to 6m0s for node "embed-certs-451331" to be "Ready" ...
	I0103 20:13:18.322077   61676 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0103 20:13:18.366727   61676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0103 20:13:16.441666   62015 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.161911946s)
	I0103 20:13:16.441698   62015 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0103 20:13:16.441720   62015 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0103 20:13:16.441740   62015 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.155838517s)
	I0103 20:13:16.441767   62015 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0103 20:13:16.441855   62015 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0103 20:13:16.441964   62015 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0103 20:13:20.073248   61676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.975867864s)
	I0103 20:13:20.073318   61676 main.go:141] libmachine: Making call to close driver server
	I0103 20:13:20.073383   61676 main.go:141] libmachine: (embed-certs-451331) Calling .Close
	I0103 20:13:20.073265   61676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.03634078s)
	I0103 20:13:20.073419   61676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.706641739s)
	I0103 20:13:20.073466   61676 main.go:141] libmachine: Making call to close driver server
	I0103 20:13:20.073490   61676 main.go:141] libmachine: (embed-certs-451331) Calling .Close
	I0103 20:13:20.073489   61676 main.go:141] libmachine: Making call to close driver server
	I0103 20:13:20.073553   61676 main.go:141] libmachine: (embed-certs-451331) Calling .Close
	I0103 20:13:20.073744   61676 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:13:20.073759   61676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:13:20.073775   61676 main.go:141] libmachine: Making call to close driver server
	I0103 20:13:20.073786   61676 main.go:141] libmachine: (embed-certs-451331) Calling .Close
	I0103 20:13:20.073878   61676 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:13:20.073905   61676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:13:20.073935   61676 main.go:141] libmachine: (embed-certs-451331) DBG | Closing plugin on server side
	I0103 20:13:20.073938   61676 main.go:141] libmachine: Making call to close driver server
	I0103 20:13:20.073980   61676 main.go:141] libmachine: (embed-certs-451331) DBG | Closing plugin on server side
	I0103 20:13:20.073992   61676 main.go:141] libmachine: (embed-certs-451331) Calling .Close
	I0103 20:13:20.074016   61676 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:13:20.074036   61676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:13:20.074073   61676 main.go:141] libmachine: Making call to close driver server
	I0103 20:13:20.074086   61676 main.go:141] libmachine: (embed-certs-451331) Calling .Close
	I0103 20:13:20.074309   61676 main.go:141] libmachine: (embed-certs-451331) DBG | Closing plugin on server side
	I0103 20:13:20.074369   61676 main.go:141] libmachine: (embed-certs-451331) DBG | Closing plugin on server side
	I0103 20:13:20.074428   61676 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:13:20.074476   61676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:13:20.074454   61676 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:13:20.074506   61676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:13:20.074558   61676 addons.go:473] Verifying addon metrics-server=true in "embed-certs-451331"
	I0103 20:13:20.077560   61676 main.go:141] libmachine: (embed-certs-451331) DBG | Closing plugin on server side
	I0103 20:13:20.077613   61676 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:13:20.077653   61676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:13:20.088401   61676 main.go:141] libmachine: Making call to close driver server
	I0103 20:13:20.088441   61676 main.go:141] libmachine: (embed-certs-451331) Calling .Close
	I0103 20:13:20.088845   61676 main.go:141] libmachine: (embed-certs-451331) DBG | Closing plugin on server side
	I0103 20:13:20.090413   61676 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:13:20.090439   61676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:13:20.092641   61676 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I0103 20:13:16.593786   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:16.594320   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | unable to find current IP address of domain default-k8s-diff-port-018788 in network mk-default-k8s-diff-port-018788
	I0103 20:13:16.594352   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | I0103 20:13:16.594229   62835 retry.go:31] will retry after 1.232411416s: waiting for machine to come up
	I0103 20:13:17.828286   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:17.832049   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | unable to find current IP address of domain default-k8s-diff-port-018788 in network mk-default-k8s-diff-port-018788
	I0103 20:13:17.832078   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | I0103 20:13:17.828787   62835 retry.go:31] will retry after 2.020753248s: waiting for machine to come up
	I0103 20:13:19.851119   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:19.851645   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | unable to find current IP address of domain default-k8s-diff-port-018788 in network mk-default-k8s-diff-port-018788
	I0103 20:13:19.851683   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | I0103 20:13:19.851595   62835 retry.go:31] will retry after 2.720330873s: waiting for machine to come up
	I0103 20:13:20.094375   61676 addons.go:508] enable addons completed in 2.334425533s: enabled=[storage-provisioner metrics-server default-storageclass]
	I0103 20:13:20.325950   61676 node_ready.go:58] node "embed-certs-451331" has status "Ready":"False"
	I0103 20:13:22.327709   61676 node_ready.go:58] node "embed-certs-451331" has status "Ready":"False"
	I0103 20:13:19.820972   62015 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (3.379182556s)
	I0103 20:13:19.821009   62015 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0103 20:13:19.821032   62015 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0103 20:13:19.820976   62015 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (3.378974193s)
	I0103 20:13:19.821081   62015 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0103 20:13:19.821092   62015 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0103 20:13:21.294764   62015 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.47365805s)
	I0103 20:13:21.294796   62015 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0103 20:13:21.294826   62015 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0103 20:13:21.294879   62015 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0103 20:13:24.067996   62015 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.773083678s)
	I0103 20:13:24.068031   62015 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0103 20:13:24.068071   62015 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0103 20:13:24.068131   62015 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0103 20:13:22.573532   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:22.573959   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | unable to find current IP address of domain default-k8s-diff-port-018788 in network mk-default-k8s-diff-port-018788
	I0103 20:13:22.573984   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | I0103 20:13:22.573882   62835 retry.go:31] will retry after 2.869192362s: waiting for machine to come up
	I0103 20:13:25.444272   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:25.444774   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | unable to find current IP address of domain default-k8s-diff-port-018788 in network mk-default-k8s-diff-port-018788
	I0103 20:13:25.444801   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | I0103 20:13:25.444710   62835 retry.go:31] will retry after 3.61848561s: waiting for machine to come up
	I0103 20:13:24.327795   61676 node_ready.go:58] node "embed-certs-451331" has status "Ready":"False"
	I0103 20:13:24.831015   61676 node_ready.go:49] node "embed-certs-451331" has status "Ready":"True"
	I0103 20:13:24.831037   61676 node_ready.go:38] duration metric: took 6.509012992s waiting for node "embed-certs-451331" to be "Ready" ...
	I0103 20:13:24.831046   61676 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:13:24.838244   61676 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-sx6gg" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:25.345945   61676 pod_ready.go:92] pod "coredns-5dd5756b68-sx6gg" in "kube-system" namespace has status "Ready":"True"
	I0103 20:13:25.345980   61676 pod_ready.go:81] duration metric: took 507.709108ms waiting for pod "coredns-5dd5756b68-sx6gg" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:25.345991   61676 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-451331" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:25.352763   61676 pod_ready.go:92] pod "etcd-embed-certs-451331" in "kube-system" namespace has status "Ready":"True"
	I0103 20:13:25.352798   61676 pod_ready.go:81] duration metric: took 6.794419ms waiting for pod "etcd-embed-certs-451331" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:25.352812   61676 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-451331" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:25.359491   61676 pod_ready.go:92] pod "kube-apiserver-embed-certs-451331" in "kube-system" namespace has status "Ready":"True"
	I0103 20:13:25.359533   61676 pod_ready.go:81] duration metric: took 6.711829ms waiting for pod "kube-apiserver-embed-certs-451331" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:25.359547   61676 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-451331" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:25.867866   61676 pod_ready.go:92] pod "kube-controller-manager-embed-certs-451331" in "kube-system" namespace has status "Ready":"True"
	I0103 20:13:25.867898   61676 pod_ready.go:81] duration metric: took 508.341809ms waiting for pod "kube-controller-manager-embed-certs-451331" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:25.867912   61676 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fsnb9" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:26.026106   61676 pod_ready.go:92] pod "kube-proxy-fsnb9" in "kube-system" namespace has status "Ready":"True"
	I0103 20:13:26.026140   61676 pod_ready.go:81] duration metric: took 158.216243ms waiting for pod "kube-proxy-fsnb9" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:26.026153   61676 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-451331" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:26.428480   61676 pod_ready.go:92] pod "kube-scheduler-embed-certs-451331" in "kube-system" namespace has status "Ready":"True"
	I0103 20:13:26.428506   61676 pod_ready.go:81] duration metric: took 402.345241ms waiting for pod "kube-scheduler-embed-certs-451331" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:26.428525   61676 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:28.438138   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:13:27.768745   62015 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (3.700590535s)
	I0103 20:13:27.768774   62015 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0103 20:13:27.768797   62015 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0103 20:13:27.768833   62015 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0103 20:13:28.718165   62015 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0103 20:13:28.718231   62015 cache_images.go:123] Successfully loaded all cached images
	I0103 20:13:28.718239   62015 cache_images.go:92] LoadImages completed in 17.301651166s
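The cache_images phase interleaved above (process 62015) copies image tarballs to the guest and loads them into CRI-O's store with podman. A small Go sketch of the load step is below; the path is hypothetical and the real runner executes the command on the guest over SSH via ssh_runner.go rather than locally.

package main

import (
	"fmt"
	"os/exec"
)

// loadImage mirrors the logged command "sudo podman load -i <tarball>".
func loadImage(tarball string) error {
	out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load failed: %v\n%s", err, out)
	}
	fmt.Printf("loaded %s:\n%s", tarball, out)
	return nil
}

func main() {
	if err := loadImage("/var/lib/minikube/images/etcd_3.5.10-0"); err != nil {
		fmt.Println(err)
	}
}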
	I0103 20:13:28.718342   62015 ssh_runner.go:195] Run: crio config
	I0103 20:13:28.770786   62015 cni.go:84] Creating CNI manager for ""
	I0103 20:13:28.770813   62015 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0103 20:13:28.770838   62015 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0103 20:13:28.770862   62015 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.245 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-749210 NodeName:no-preload-749210 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.245"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.245 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0103 20:13:28.771031   62015 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.245
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-749210"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.245
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.245"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0103 20:13:28.771103   62015 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-749210 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.245
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-749210 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0103 20:13:28.771163   62015 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0103 20:13:28.780756   62015 binaries.go:44] Found k8s binaries, skipping transfer
	I0103 20:13:28.780834   62015 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0103 20:13:28.789160   62015 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I0103 20:13:28.804638   62015 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0103 20:13:28.820113   62015 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I0103 20:13:28.835707   62015 ssh_runner.go:195] Run: grep 192.168.61.245	control-plane.minikube.internal$ /etc/hosts
	I0103 20:13:28.839456   62015 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.245	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0103 20:13:28.850530   62015 certs.go:56] Setting up /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/no-preload-749210 for IP: 192.168.61.245
	I0103 20:13:28.850581   62015 certs.go:190] acquiring lock for shared ca certs: {Name:mkcbd6a6a2f3ee7625ecf4a1f72bb7f9689bd33d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:13:28.850730   62015 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17885-9609/.minikube/ca.key
	I0103 20:13:28.850770   62015 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.key
	I0103 20:13:28.850833   62015 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/no-preload-749210/client.key
	I0103 20:13:28.850886   62015 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/no-preload-749210/apiserver.key.5dd805e0
	I0103 20:13:28.850922   62015 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/no-preload-749210/proxy-client.key
	I0103 20:13:28.851054   62015 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795.pem (1338 bytes)
	W0103 20:13:28.851081   62015 certs.go:433] ignoring /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795_empty.pem, impossibly tiny 0 bytes
	I0103 20:13:28.851093   62015 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem (1675 bytes)
	I0103 20:13:28.851117   62015 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem (1078 bytes)
	I0103 20:13:28.851139   62015 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem (1123 bytes)
	I0103 20:13:28.851168   62015 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem (1679 bytes)
	I0103 20:13:28.851210   62015 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem (1708 bytes)
	I0103 20:13:28.851832   62015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/no-preload-749210/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0103 20:13:28.874236   62015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/no-preload-749210/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0103 20:13:28.896624   62015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/no-preload-749210/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0103 20:13:28.919016   62015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/no-preload-749210/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0103 20:13:28.941159   62015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0103 20:13:28.963311   62015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0103 20:13:28.985568   62015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0103 20:13:29.007709   62015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0103 20:13:29.030188   62015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0103 20:13:29.052316   62015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795.pem --> /usr/share/ca-certificates/16795.pem (1338 bytes)
	I0103 20:13:29.076761   62015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem --> /usr/share/ca-certificates/167952.pem (1708 bytes)
	I0103 20:13:29.101462   62015 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0103 20:13:29.118605   62015 ssh_runner.go:195] Run: openssl version
	I0103 20:13:29.124144   62015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0103 20:13:29.133148   62015 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:13:29.137750   62015 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  3 18:58 /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:13:29.137809   62015 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:13:29.143321   62015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0103 20:13:29.152302   62015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16795.pem && ln -fs /usr/share/ca-certificates/16795.pem /etc/ssl/certs/16795.pem"
	I0103 20:13:29.161551   62015 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16795.pem
	I0103 20:13:29.166396   62015 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  3 19:07 /usr/share/ca-certificates/16795.pem
	I0103 20:13:29.166457   62015 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16795.pem
	I0103 20:13:29.173179   62015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16795.pem /etc/ssl/certs/51391683.0"
	I0103 20:13:29.184167   62015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167952.pem && ln -fs /usr/share/ca-certificates/167952.pem /etc/ssl/certs/167952.pem"
	I0103 20:13:29.194158   62015 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167952.pem
	I0103 20:13:29.198763   62015 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  3 19:07 /usr/share/ca-certificates/167952.pem
	I0103 20:13:29.198836   62015 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167952.pem
	I0103 20:13:29.204516   62015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167952.pem /etc/ssl/certs/3ec20f2e.0"
	I0103 20:13:29.214529   62015 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0103 20:13:29.218834   62015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0103 20:13:29.225036   62015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0103 20:13:29.231166   62015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0103 20:13:29.237200   62015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0103 20:13:29.243158   62015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0103 20:13:29.249694   62015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0103 20:13:29.255582   62015 kubeadm.go:404] StartCluster: {Name:no-preload-749210 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-749210 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.245 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 20:13:29.255672   62015 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0103 20:13:29.255758   62015 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0103 20:13:29.299249   62015 cri.go:89] found id: ""
	I0103 20:13:29.299346   62015 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0103 20:13:29.311210   62015 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0103 20:13:29.311227   62015 kubeadm.go:636] restartCluster start
	I0103 20:13:29.311271   62015 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0103 20:13:29.320430   62015 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:29.321471   62015 kubeconfig.go:92] found "no-preload-749210" server: "https://192.168.61.245:8443"
	I0103 20:13:29.324643   62015 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0103 20:13:29.333237   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:29.333300   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:29.345156   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:30.219284   61400 start.go:369] acquired machines lock for "old-k8s-version-927922" in 54.622555379s
	I0103 20:13:30.219352   61400 start.go:96] Skipping create...Using existing machine configuration
	I0103 20:13:30.219364   61400 fix.go:54] fixHost starting: 
	I0103 20:13:30.219739   61400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:13:30.219770   61400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:13:30.235529   61400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41183
	I0103 20:13:30.235926   61400 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:13:30.236537   61400 main.go:141] libmachine: Using API Version  1
	I0103 20:13:30.236562   61400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:13:30.236911   61400 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:13:30.237121   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .DriverName
	I0103 20:13:30.237293   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetState
	I0103 20:13:30.238979   61400 fix.go:102] recreateIfNeeded on old-k8s-version-927922: state=Stopped err=<nil>
	I0103 20:13:30.239006   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .DriverName
	W0103 20:13:30.239155   61400 fix.go:128] unexpected machine state, will restart: <nil>
	I0103 20:13:30.241210   61400 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-927922" ...
	I0103 20:13:29.067586   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.068030   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Found IP for machine: 192.168.39.139
	I0103 20:13:29.068048   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Reserving static IP address...
	I0103 20:13:29.068090   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has current primary IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.068505   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-018788", mac: "52:54:00:df:c8:9f", ip: "192.168.39.139"} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:13:29.068532   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | skip adding static IP to network mk-default-k8s-diff-port-018788 - found existing host DHCP lease matching {name: "default-k8s-diff-port-018788", mac: "52:54:00:df:c8:9f", ip: "192.168.39.139"}
	I0103 20:13:29.068549   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Reserved static IP address: 192.168.39.139
	I0103 20:13:29.068571   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for SSH to be available...
	I0103 20:13:29.068608   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | Getting to WaitForSSH function...
	I0103 20:13:29.071139   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.071587   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c8:9f", ip: ""} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:13:29.071620   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.071779   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | Using SSH client type: external
	I0103 20:13:29.071810   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | Using SSH private key: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/default-k8s-diff-port-018788/id_rsa (-rw-------)
	I0103 20:13:29.071858   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.139 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17885-9609/.minikube/machines/default-k8s-diff-port-018788/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0103 20:13:29.071879   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | About to run SSH command:
	I0103 20:13:29.071896   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | exit 0
	I0103 20:13:29.166962   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | SSH cmd err, output: <nil>: 
	I0103 20:13:29.167365   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetConfigRaw
	I0103 20:13:29.167989   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetIP
	I0103 20:13:29.170671   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.171052   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c8:9f", ip: ""} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:13:29.171092   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.171325   62050 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/default-k8s-diff-port-018788/config.json ...
	I0103 20:13:29.171564   62050 machine.go:88] provisioning docker machine ...
	I0103 20:13:29.171589   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .DriverName
	I0103 20:13:29.171866   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetMachineName
	I0103 20:13:29.172058   62050 buildroot.go:166] provisioning hostname "default-k8s-diff-port-018788"
	I0103 20:13:29.172084   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetMachineName
	I0103 20:13:29.172253   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHHostname
	I0103 20:13:29.175261   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.175626   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c8:9f", ip: ""} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:13:29.175660   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.175749   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHPort
	I0103 20:13:29.175943   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHKeyPath
	I0103 20:13:29.176219   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHKeyPath
	I0103 20:13:29.176392   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHUsername
	I0103 20:13:29.176611   62050 main.go:141] libmachine: Using SSH client type: native
	I0103 20:13:29.177083   62050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.39.139 22 <nil> <nil>}
	I0103 20:13:29.177105   62050 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-018788 && echo "default-k8s-diff-port-018788" | sudo tee /etc/hostname
	I0103 20:13:29.304876   62050 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-018788
	
	I0103 20:13:29.304915   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHHostname
	I0103 20:13:29.307645   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.308124   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c8:9f", ip: ""} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:13:29.308190   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.308389   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHPort
	I0103 20:13:29.308619   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHKeyPath
	I0103 20:13:29.308799   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHKeyPath
	I0103 20:13:29.308997   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHUsername
	I0103 20:13:29.309177   62050 main.go:141] libmachine: Using SSH client type: native
	I0103 20:13:29.309652   62050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.39.139 22 <nil> <nil>}
	I0103 20:13:29.309682   62050 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-018788' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-018788/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-018788' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0103 20:13:29.431479   62050 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0103 20:13:29.431517   62050 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17885-9609/.minikube CaCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17885-9609/.minikube}
	I0103 20:13:29.431555   62050 buildroot.go:174] setting up certificates
	I0103 20:13:29.431569   62050 provision.go:83] configureAuth start
	I0103 20:13:29.431582   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetMachineName
	I0103 20:13:29.431900   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetIP
	I0103 20:13:29.435012   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.435482   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c8:9f", ip: ""} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:13:29.435517   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.435638   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHHostname
	I0103 20:13:29.437865   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.438267   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c8:9f", ip: ""} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:13:29.438303   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.438388   62050 provision.go:138] copyHostCerts
	I0103 20:13:29.438448   62050 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem, removing ...
	I0103 20:13:29.438461   62050 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem
	I0103 20:13:29.438527   62050 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem (1078 bytes)
	I0103 20:13:29.438625   62050 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem, removing ...
	I0103 20:13:29.438633   62050 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem
	I0103 20:13:29.438653   62050 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem (1123 bytes)
	I0103 20:13:29.438713   62050 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem, removing ...
	I0103 20:13:29.438720   62050 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem
	I0103 20:13:29.438738   62050 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem (1679 bytes)
	I0103 20:13:29.438787   62050 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-018788 san=[192.168.39.139 192.168.39.139 localhost 127.0.0.1 minikube default-k8s-diff-port-018788]
	I0103 20:13:29.494476   62050 provision.go:172] copyRemoteCerts
	I0103 20:13:29.494562   62050 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0103 20:13:29.494590   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHHostname
	I0103 20:13:29.497330   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.497597   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c8:9f", ip: ""} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:13:29.497623   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.497786   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHPort
	I0103 20:13:29.497956   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHKeyPath
	I0103 20:13:29.498139   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHUsername
	I0103 20:13:29.498268   62050 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/default-k8s-diff-port-018788/id_rsa Username:docker}
	I0103 20:13:29.583531   62050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0103 20:13:29.605944   62050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0103 20:13:29.630747   62050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0103 20:13:29.656325   62050 provision.go:86] duration metric: configureAuth took 224.741883ms
	I0103 20:13:29.656355   62050 buildroot.go:189] setting minikube options for container-runtime
	I0103 20:13:29.656525   62050 config.go:182] Loaded profile config "default-k8s-diff-port-018788": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 20:13:29.656619   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHHostname
	I0103 20:13:29.659656   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.660182   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c8:9f", ip: ""} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:13:29.660213   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.660434   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHPort
	I0103 20:13:29.660643   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHKeyPath
	I0103 20:13:29.660864   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHKeyPath
	I0103 20:13:29.661019   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHUsername
	I0103 20:13:29.661217   62050 main.go:141] libmachine: Using SSH client type: native
	I0103 20:13:29.661571   62050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.39.139 22 <nil> <nil>}
	I0103 20:13:29.661588   62050 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0103 20:13:29.970938   62050 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0103 20:13:29.970966   62050 machine.go:91] provisioned docker machine in 799.385733ms
	I0103 20:13:29.970975   62050 start.go:300] post-start starting for "default-k8s-diff-port-018788" (driver="kvm2")
	I0103 20:13:29.970985   62050 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0103 20:13:29.971007   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .DriverName
	I0103 20:13:29.971387   62050 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0103 20:13:29.971418   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHHostname
	I0103 20:13:29.974114   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.974487   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c8:9f", ip: ""} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:13:29.974562   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.974706   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHPort
	I0103 20:13:29.974894   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHKeyPath
	I0103 20:13:29.975075   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHUsername
	I0103 20:13:29.975227   62050 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/default-k8s-diff-port-018788/id_rsa Username:docker}
	I0103 20:13:30.061987   62050 ssh_runner.go:195] Run: cat /etc/os-release
	I0103 20:13:30.066591   62050 info.go:137] Remote host: Buildroot 2021.02.12
	I0103 20:13:30.066620   62050 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-9609/.minikube/addons for local assets ...
	I0103 20:13:30.066704   62050 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-9609/.minikube/files for local assets ...
	I0103 20:13:30.066795   62050 filesync.go:149] local asset: /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem -> 167952.pem in /etc/ssl/certs
	I0103 20:13:30.066899   62050 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0103 20:13:30.076755   62050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem --> /etc/ssl/certs/167952.pem (1708 bytes)
	I0103 20:13:30.099740   62050 start.go:303] post-start completed in 128.750887ms
	I0103 20:13:30.099763   62050 fix.go:56] fixHost completed within 20.287967183s
	I0103 20:13:30.099782   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHHostname
	I0103 20:13:30.102744   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:30.103145   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c8:9f", ip: ""} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:13:30.103177   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:30.103409   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHPort
	I0103 20:13:30.103633   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHKeyPath
	I0103 20:13:30.103846   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHKeyPath
	I0103 20:13:30.104080   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHUsername
	I0103 20:13:30.104308   62050 main.go:141] libmachine: Using SSH client type: native
	I0103 20:13:30.104680   62050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.39.139 22 <nil> <nil>}
	I0103 20:13:30.104696   62050 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0103 20:13:30.219120   62050 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704312810.161605674
	
	I0103 20:13:30.219145   62050 fix.go:206] guest clock: 1704312810.161605674
	I0103 20:13:30.219154   62050 fix.go:219] Guest: 2024-01-03 20:13:30.161605674 +0000 UTC Remote: 2024-01-03 20:13:30.099767061 +0000 UTC m=+264.645600185 (delta=61.838613ms)
	I0103 20:13:30.219191   62050 fix.go:190] guest clock delta is within tolerance: 61.838613ms
	I0103 20:13:30.219202   62050 start.go:83] releasing machines lock for "default-k8s-diff-port-018788", held for 20.407440359s
	I0103 20:13:30.219230   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .DriverName
	I0103 20:13:30.219551   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetIP
	I0103 20:13:30.222200   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:30.222616   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c8:9f", ip: ""} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:13:30.222650   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:30.222811   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .DriverName
	I0103 20:13:30.223411   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .DriverName
	I0103 20:13:30.223568   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .DriverName
	I0103 20:13:30.223643   62050 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0103 20:13:30.223686   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHHostname
	I0103 20:13:30.223940   62050 ssh_runner.go:195] Run: cat /version.json
	I0103 20:13:30.223970   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHHostname
	I0103 20:13:30.226394   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:30.226746   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c8:9f", ip: ""} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:13:30.226777   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:30.226809   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:30.227080   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHPort
	I0103 20:13:30.227274   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHKeyPath
	I0103 20:13:30.227389   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c8:9f", ip: ""} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:13:30.227443   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHUsername
	I0103 20:13:30.227446   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:30.227567   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHPort
	I0103 20:13:30.227595   62050 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/default-k8s-diff-port-018788/id_rsa Username:docker}
	I0103 20:13:30.227739   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHKeyPath
	I0103 20:13:30.227864   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHUsername
	I0103 20:13:30.227972   62050 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/default-k8s-diff-port-018788/id_rsa Username:docker}
	I0103 20:13:30.315855   62050 ssh_runner.go:195] Run: systemctl --version
	I0103 20:13:30.359117   62050 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0103 20:13:30.499200   62050 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0103 20:13:30.505296   62050 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0103 20:13:30.505768   62050 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0103 20:13:30.520032   62050 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0103 20:13:30.520059   62050 start.go:475] detecting cgroup driver to use...
	I0103 20:13:30.520146   62050 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0103 20:13:30.532684   62050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0103 20:13:30.545152   62050 docker.go:203] disabling cri-docker service (if available) ...
	I0103 20:13:30.545222   62050 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0103 20:13:30.558066   62050 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0103 20:13:30.570999   62050 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0103 20:13:30.682484   62050 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0103 20:13:30.802094   62050 docker.go:219] disabling docker service ...
	I0103 20:13:30.802171   62050 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0103 20:13:30.815796   62050 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0103 20:13:30.827982   62050 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0103 20:13:30.952442   62050 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0103 20:13:31.068759   62050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0103 20:13:31.083264   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0103 20:13:31.102893   62050 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0103 20:13:31.102979   62050 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:13:31.112366   62050 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0103 20:13:31.112433   62050 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:13:31.122940   62050 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:13:31.133385   62050 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:13:31.144251   62050 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0103 20:13:31.155210   62050 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0103 20:13:31.164488   62050 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0103 20:13:31.164552   62050 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0103 20:13:31.177632   62050 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0103 20:13:31.186983   62050 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0103 20:13:31.309264   62050 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0103 20:13:31.493626   62050 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0103 20:13:31.493706   62050 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0103 20:13:31.504103   62050 start.go:543] Will wait 60s for crictl version
	I0103 20:13:31.504187   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:13:31.507927   62050 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0103 20:13:31.543967   62050 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0103 20:13:31.544046   62050 ssh_runner.go:195] Run: crio --version
	I0103 20:13:31.590593   62050 ssh_runner.go:195] Run: crio --version
	I0103 20:13:31.639562   62050 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0103 20:13:30.242808   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .Start
	I0103 20:13:30.242991   61400 main.go:141] libmachine: (old-k8s-version-927922) Ensuring networks are active...
	I0103 20:13:30.243776   61400 main.go:141] libmachine: (old-k8s-version-927922) Ensuring network default is active
	I0103 20:13:30.244126   61400 main.go:141] libmachine: (old-k8s-version-927922) Ensuring network mk-old-k8s-version-927922 is active
	I0103 20:13:30.244504   61400 main.go:141] libmachine: (old-k8s-version-927922) Getting domain xml...
	I0103 20:13:30.245244   61400 main.go:141] libmachine: (old-k8s-version-927922) Creating domain...
	I0103 20:13:31.553239   61400 main.go:141] libmachine: (old-k8s-version-927922) Waiting to get IP...
	I0103 20:13:31.554409   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:31.554942   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | unable to find current IP address of domain old-k8s-version-927922 in network mk-old-k8s-version-927922
	I0103 20:13:31.555022   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | I0103 20:13:31.554922   63030 retry.go:31] will retry after 192.654673ms: waiting for machine to come up
	I0103 20:13:31.749588   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:31.750035   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | unable to find current IP address of domain old-k8s-version-927922 in network mk-old-k8s-version-927922
	I0103 20:13:31.750058   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | I0103 20:13:31.750000   63030 retry.go:31] will retry after 270.810728ms: waiting for machine to come up
	I0103 20:13:32.022736   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:32.023310   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | unable to find current IP address of domain old-k8s-version-927922 in network mk-old-k8s-version-927922
	I0103 20:13:32.023337   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | I0103 20:13:32.023280   63030 retry.go:31] will retry after 327.320898ms: waiting for machine to come up
	I0103 20:13:32.352845   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:32.353453   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | unable to find current IP address of domain old-k8s-version-927922 in network mk-old-k8s-version-927922
	I0103 20:13:32.353501   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | I0103 20:13:32.353395   63030 retry.go:31] will retry after 575.525231ms: waiting for machine to come up
	I0103 20:13:32.930217   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:32.930833   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | unable to find current IP address of domain old-k8s-version-927922 in network mk-old-k8s-version-927922
	I0103 20:13:32.930859   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | I0103 20:13:32.930741   63030 retry.go:31] will retry after 571.986596ms: waiting for machine to come up
	I0103 20:13:30.936363   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:13:32.939164   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:13:29.833307   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:29.833374   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:29.844819   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:30.333870   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:30.333936   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:30.345802   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:30.833281   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:30.833400   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:30.848469   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:31.334071   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:31.334151   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:31.346445   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:31.833944   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:31.834034   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:31.848925   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:32.333349   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:32.333432   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:32.349173   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:32.833632   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:32.833696   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:32.848186   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:33.333659   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:33.333757   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:33.349560   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:33.834221   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:33.834309   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:33.846637   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:34.334219   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:34.334299   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:34.350703   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:31.641182   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetIP
	I0103 20:13:31.644371   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:31.644677   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c8:9f", ip: ""} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:13:31.644712   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:31.644971   62050 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0103 20:13:31.649106   62050 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0103 20:13:31.662256   62050 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0103 20:13:31.662380   62050 ssh_runner.go:195] Run: sudo crictl images --output json
	I0103 20:13:31.701210   62050 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0103 20:13:31.701275   62050 ssh_runner.go:195] Run: which lz4
	I0103 20:13:31.704890   62050 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0103 20:13:31.708756   62050 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0103 20:13:31.708783   62050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0103 20:13:33.543202   62050 crio.go:444] Took 1.838336 seconds to copy over tarball
	I0103 20:13:33.543282   62050 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0103 20:13:33.504797   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:33.505336   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | unable to find current IP address of domain old-k8s-version-927922 in network mk-old-k8s-version-927922
	I0103 20:13:33.505363   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | I0103 20:13:33.505286   63030 retry.go:31] will retry after 593.865088ms: waiting for machine to come up
	I0103 20:13:34.101055   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:34.101559   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | unable to find current IP address of domain old-k8s-version-927922 in network mk-old-k8s-version-927922
	I0103 20:13:34.101593   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | I0103 20:13:34.101507   63030 retry.go:31] will retry after 1.016460442s: waiting for machine to come up
	I0103 20:13:35.119877   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:35.120383   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | unable to find current IP address of domain old-k8s-version-927922 in network mk-old-k8s-version-927922
	I0103 20:13:35.120415   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | I0103 20:13:35.120352   63030 retry.go:31] will retry after 1.462823241s: waiting for machine to come up
	I0103 20:13:36.585467   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:36.585968   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | unable to find current IP address of domain old-k8s-version-927922 in network mk-old-k8s-version-927922
	I0103 20:13:36.585993   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | I0103 20:13:36.585932   63030 retry.go:31] will retry after 1.213807131s: waiting for machine to come up
	I0103 20:13:37.801504   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:37.801970   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | unable to find current IP address of domain old-k8s-version-927922 in network mk-old-k8s-version-927922
	I0103 20:13:37.801999   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | I0103 20:13:37.801896   63030 retry.go:31] will retry after 1.961227471s: waiting for machine to come up
	I0103 20:13:35.435661   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:13:37.435870   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:13:34.834090   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:34.834160   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:34.848657   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:35.333723   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:35.333809   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:35.348582   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:35.834128   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:35.834208   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:35.845911   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:36.333385   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:36.333512   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:36.346391   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:36.833978   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:36.834054   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:36.847134   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:37.333698   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:37.333785   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:37.346411   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:37.834024   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:37.834141   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:37.846961   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:38.333461   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:38.333665   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:38.346713   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:38.834378   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:38.834470   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:38.848473   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:39.333266   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:39.333347   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:39.345638   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:39.345664   62015 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0103 20:13:39.345692   62015 kubeadm.go:1135] stopping kube-system containers ...
	I0103 20:13:39.345721   62015 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0103 20:13:39.345792   62015 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0103 20:13:39.387671   62015 cri.go:89] found id: ""
	I0103 20:13:39.387778   62015 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0103 20:13:39.403523   62015 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0103 20:13:39.413114   62015 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0103 20:13:39.413188   62015 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0103 20:13:39.421503   62015 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0103 20:13:39.421527   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:39.561406   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:36.473303   62050 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.929985215s)
	I0103 20:13:36.473337   62050 crio.go:451] Took 2.930104 seconds to extract the tarball
	I0103 20:13:36.473350   62050 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0103 20:13:36.513202   62050 ssh_runner.go:195] Run: sudo crictl images --output json
	I0103 20:13:36.557201   62050 crio.go:496] all images are preloaded for cri-o runtime.
	I0103 20:13:36.557231   62050 cache_images.go:84] Images are preloaded, skipping loading
	I0103 20:13:36.557314   62050 ssh_runner.go:195] Run: crio config
	I0103 20:13:36.618916   62050 cni.go:84] Creating CNI manager for ""
	I0103 20:13:36.618948   62050 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0103 20:13:36.618982   62050 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0103 20:13:36.619007   62050 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.139 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-018788 NodeName:default-k8s-diff-port-018788 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.139"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.139 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0103 20:13:36.619167   62050 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.139
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-018788"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.139
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.139"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0103 20:13:36.619242   62050 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-018788 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.139
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-018788 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0103 20:13:36.619294   62050 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0103 20:13:36.628488   62050 binaries.go:44] Found k8s binaries, skipping transfer
	I0103 20:13:36.628571   62050 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0103 20:13:36.637479   62050 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I0103 20:13:36.652608   62050 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0103 20:13:36.667432   62050 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I0103 20:13:36.683138   62050 ssh_runner.go:195] Run: grep 192.168.39.139	control-plane.minikube.internal$ /etc/hosts
	I0103 20:13:36.687022   62050 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.139	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0103 20:13:36.698713   62050 certs.go:56] Setting up /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/default-k8s-diff-port-018788 for IP: 192.168.39.139
	I0103 20:13:36.698755   62050 certs.go:190] acquiring lock for shared ca certs: {Name:mkcbd6a6a2f3ee7625ecf4a1f72bb7f9689bd33d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:13:36.698948   62050 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17885-9609/.minikube/ca.key
	I0103 20:13:36.699009   62050 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.key
	I0103 20:13:36.699098   62050 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/default-k8s-diff-port-018788/client.key
	I0103 20:13:36.699157   62050 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/default-k8s-diff-port-018788/apiserver.key.7716debd
	I0103 20:13:36.699196   62050 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/default-k8s-diff-port-018788/proxy-client.key
	I0103 20:13:36.699287   62050 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795.pem (1338 bytes)
	W0103 20:13:36.699314   62050 certs.go:433] ignoring /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795_empty.pem, impossibly tiny 0 bytes
	I0103 20:13:36.699324   62050 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem (1675 bytes)
	I0103 20:13:36.699349   62050 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem (1078 bytes)
	I0103 20:13:36.699370   62050 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem (1123 bytes)
	I0103 20:13:36.699395   62050 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem (1679 bytes)
	I0103 20:13:36.699434   62050 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem (1708 bytes)
	I0103 20:13:36.700045   62050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/default-k8s-diff-port-018788/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0103 20:13:36.721872   62050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/default-k8s-diff-port-018788/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0103 20:13:36.744733   62050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/default-k8s-diff-port-018788/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0103 20:13:36.772245   62050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/default-k8s-diff-port-018788/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0103 20:13:36.796690   62050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0103 20:13:36.819792   62050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0103 20:13:36.843109   62050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0103 20:13:36.866679   62050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0103 20:13:36.889181   62050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0103 20:13:36.912082   62050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795.pem --> /usr/share/ca-certificates/16795.pem (1338 bytes)
	I0103 20:13:36.935621   62050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem --> /usr/share/ca-certificates/167952.pem (1708 bytes)
	I0103 20:13:36.959090   62050 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0103 20:13:36.974873   62050 ssh_runner.go:195] Run: openssl version
	I0103 20:13:36.980449   62050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167952.pem && ln -fs /usr/share/ca-certificates/167952.pem /etc/ssl/certs/167952.pem"
	I0103 20:13:36.990278   62050 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167952.pem
	I0103 20:13:36.995822   62050 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  3 19:07 /usr/share/ca-certificates/167952.pem
	I0103 20:13:36.995903   62050 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167952.pem
	I0103 20:13:37.001504   62050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167952.pem /etc/ssl/certs/3ec20f2e.0"
	I0103 20:13:37.011628   62050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0103 20:13:37.021373   62050 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:13:37.025697   62050 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  3 18:58 /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:13:37.025752   62050 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:13:37.031286   62050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0103 20:13:37.041075   62050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16795.pem && ln -fs /usr/share/ca-certificates/16795.pem /etc/ssl/certs/16795.pem"
	I0103 20:13:37.050789   62050 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16795.pem
	I0103 20:13:37.055584   62050 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  3 19:07 /usr/share/ca-certificates/16795.pem
	I0103 20:13:37.055647   62050 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16795.pem
	I0103 20:13:37.061079   62050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16795.pem /etc/ssl/certs/51391683.0"
	I0103 20:13:37.070792   62050 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0103 20:13:37.075050   62050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0103 20:13:37.081170   62050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0103 20:13:37.087372   62050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0103 20:13:37.093361   62050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0103 20:13:37.099203   62050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0103 20:13:37.104932   62050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0103 20:13:37.110783   62050 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-018788 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-018788 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.139 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 20:13:37.110955   62050 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0103 20:13:37.111003   62050 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0103 20:13:37.146687   62050 cri.go:89] found id: ""
	I0103 20:13:37.146766   62050 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0103 20:13:37.156789   62050 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0103 20:13:37.156808   62050 kubeadm.go:636] restartCluster start
	I0103 20:13:37.156882   62050 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0103 20:13:37.166168   62050 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:37.167346   62050 kubeconfig.go:92] found "default-k8s-diff-port-018788" server: "https://192.168.39.139:8444"
	I0103 20:13:37.169750   62050 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0103 20:13:37.178965   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:37.179035   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:37.190638   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:37.679072   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:37.679142   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:37.691149   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:38.179709   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:38.179804   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:38.191656   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:38.679825   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:38.679912   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:38.693380   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:39.179927   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:39.180042   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:39.193368   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:39.679947   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:39.680049   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:39.692444   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:40.179510   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:40.179600   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:40.192218   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:39.764226   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:39.764651   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | unable to find current IP address of domain old-k8s-version-927922 in network mk-old-k8s-version-927922
	I0103 20:13:39.764681   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | I0103 20:13:39.764592   63030 retry.go:31] will retry after 2.38598238s: waiting for machine to come up
	I0103 20:13:42.151992   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:42.152486   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | unable to find current IP address of domain old-k8s-version-927922 in network mk-old-k8s-version-927922
	I0103 20:13:42.152517   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | I0103 20:13:42.152435   63030 retry.go:31] will retry after 3.320569317s: waiting for machine to come up
	I0103 20:13:39.438887   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:13:41.441552   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:13:40.707462   62015 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.146014282s)
	I0103 20:13:40.707501   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:40.913812   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:41.008294   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:41.093842   62015 api_server.go:52] waiting for apiserver process to appear ...
	I0103 20:13:41.093931   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:13:41.594484   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:13:42.094333   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:13:42.594647   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:13:43.094744   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:13:43.594323   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:13:43.628624   62015 api_server.go:72] duration metric: took 2.534781213s to wait for apiserver process to appear ...
	I0103 20:13:43.628653   62015 api_server.go:88] waiting for apiserver healthz status ...
	I0103 20:13:43.628674   62015 api_server.go:253] Checking apiserver healthz at https://192.168.61.245:8443/healthz ...
	I0103 20:13:40.679867   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:40.679959   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:40.692707   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:41.179865   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:41.179962   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:41.192901   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:41.679604   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:41.679668   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:41.691755   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:42.179959   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:42.180082   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:42.193149   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:42.679682   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:42.679808   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:42.696777   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:43.179236   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:43.179343   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:43.195021   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:43.679230   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:43.679339   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:43.696886   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:44.179488   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:44.179558   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:44.194865   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:44.679087   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:44.679216   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:44.693383   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:45.179505   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:45.179607   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:45.190496   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:45.474145   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:45.474596   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | unable to find current IP address of domain old-k8s-version-927922 in network mk-old-k8s-version-927922
	I0103 20:13:45.474623   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | I0103 20:13:45.474542   63030 retry.go:31] will retry after 3.652901762s: waiting for machine to come up
	I0103 20:13:43.937146   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:13:45.938328   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:13:47.941499   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:13:47.277935   62015 api_server.go:279] https://192.168.61.245:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0103 20:13:47.277971   62015 api_server.go:103] status: https://192.168.61.245:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0103 20:13:47.277988   62015 api_server.go:253] Checking apiserver healthz at https://192.168.61.245:8443/healthz ...
	I0103 20:13:47.543418   62015 api_server.go:279] https://192.168.61.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0103 20:13:47.543449   62015 api_server.go:103] status: https://192.168.61.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0103 20:13:47.629720   62015 api_server.go:253] Checking apiserver healthz at https://192.168.61.245:8443/healthz ...
	I0103 20:13:47.635340   62015 api_server.go:279] https://192.168.61.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0103 20:13:47.635373   62015 api_server.go:103] status: https://192.168.61.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0103 20:13:48.128849   62015 api_server.go:253] Checking apiserver healthz at https://192.168.61.245:8443/healthz ...
	I0103 20:13:48.135534   62015 api_server.go:279] https://192.168.61.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0103 20:13:48.135576   62015 api_server.go:103] status: https://192.168.61.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0103 20:13:48.628977   62015 api_server.go:253] Checking apiserver healthz at https://192.168.61.245:8443/healthz ...
	I0103 20:13:48.634609   62015 api_server.go:279] https://192.168.61.245:8443/healthz returned 200:
	ok
	I0103 20:13:48.643475   62015 api_server.go:141] control plane version: v1.29.0-rc.2
	I0103 20:13:48.643505   62015 api_server.go:131] duration metric: took 5.01484434s to wait for apiserver health ...
	I0103 20:13:48.643517   62015 cni.go:84] Creating CNI manager for ""
	I0103 20:13:48.643526   62015 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0103 20:13:48.645945   62015 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0103 20:13:48.647556   62015 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0103 20:13:48.671093   62015 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0103 20:13:48.698710   62015 system_pods.go:43] waiting for kube-system pods to appear ...
	I0103 20:13:48.712654   62015 system_pods.go:59] 8 kube-system pods found
	I0103 20:13:48.712704   62015 system_pods.go:61] "coredns-76f75df574-rbx58" [d5e91e6a-e3f9-4dbc-83ff-3069cb67847c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0103 20:13:48.712717   62015 system_pods.go:61] "etcd-no-preload-749210" [3cfe84f3-28bd-490f-a7fc-152c1b9784ce] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0103 20:13:48.712729   62015 system_pods.go:61] "kube-apiserver-no-preload-749210" [1d9d03fa-23c6-4432-b7ec-905fcab8a628] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0103 20:13:48.712739   62015 system_pods.go:61] "kube-controller-manager-no-preload-749210" [4e4207ef-8844-4547-88a4-b12026250554] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0103 20:13:48.712761   62015 system_pods.go:61] "kube-proxy-5hwf4" [98fafdf5-9a74-4c9f-96eb-20064c72c4e1] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0103 20:13:48.712771   62015 system_pods.go:61] "kube-scheduler-no-preload-749210" [21e70024-26b0-4740-ba52-99893ca20809] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0103 20:13:48.712780   62015 system_pods.go:61] "metrics-server-57f55c9bc5-tqn5m" [8cc1dc91-fafb-4405-8820-a7f99ccbbb0c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0103 20:13:48.712793   62015 system_pods.go:61] "storage-provisioner" [1bf4f1d7-c083-47e7-9976-76bbc72e7bff] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0103 20:13:48.712806   62015 system_pods.go:74] duration metric: took 14.071881ms to wait for pod list to return data ...
	I0103 20:13:48.712818   62015 node_conditions.go:102] verifying NodePressure condition ...
	I0103 20:13:48.716271   62015 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0103 20:13:48.716301   62015 node_conditions.go:123] node cpu capacity is 2
	I0103 20:13:48.716326   62015 node_conditions.go:105] duration metric: took 3.496257ms to run NodePressure ...
	I0103 20:13:48.716348   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:49.020956   62015 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0103 20:13:49.025982   62015 kubeadm.go:787] kubelet initialised
	I0103 20:13:49.026003   62015 kubeadm.go:788] duration metric: took 5.022549ms waiting for restarted kubelet to initialise ...
	I0103 20:13:49.026010   62015 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:13:49.033471   62015 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-rbx58" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:49.038777   62015 pod_ready.go:97] node "no-preload-749210" hosting pod "coredns-76f75df574-rbx58" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-749210" has status "Ready":"False"
	I0103 20:13:49.038806   62015 pod_ready.go:81] duration metric: took 5.286579ms waiting for pod "coredns-76f75df574-rbx58" in "kube-system" namespace to be "Ready" ...
	E0103 20:13:49.038823   62015 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-749210" hosting pod "coredns-76f75df574-rbx58" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-749210" has status "Ready":"False"
	I0103 20:13:49.038834   62015 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-749210" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:49.044324   62015 pod_ready.go:97] node "no-preload-749210" hosting pod "etcd-no-preload-749210" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-749210" has status "Ready":"False"
	I0103 20:13:49.044349   62015 pod_ready.go:81] duration metric: took 5.506628ms waiting for pod "etcd-no-preload-749210" in "kube-system" namespace to be "Ready" ...
	E0103 20:13:49.044357   62015 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-749210" hosting pod "etcd-no-preload-749210" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-749210" has status "Ready":"False"
	I0103 20:13:49.044363   62015 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-749210" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:49.049022   62015 pod_ready.go:97] node "no-preload-749210" hosting pod "kube-apiserver-no-preload-749210" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-749210" has status "Ready":"False"
	I0103 20:13:49.049058   62015 pod_ready.go:81] duration metric: took 4.681942ms waiting for pod "kube-apiserver-no-preload-749210" in "kube-system" namespace to be "Ready" ...
	E0103 20:13:49.049068   62015 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-749210" hosting pod "kube-apiserver-no-preload-749210" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-749210" has status "Ready":"False"
	I0103 20:13:49.049073   62015 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-749210" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:49.102378   62015 pod_ready.go:97] node "no-preload-749210" hosting pod "kube-controller-manager-no-preload-749210" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-749210" has status "Ready":"False"
	I0103 20:13:49.102407   62015 pod_ready.go:81] duration metric: took 53.323019ms waiting for pod "kube-controller-manager-no-preload-749210" in "kube-system" namespace to be "Ready" ...
	E0103 20:13:49.102415   62015 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-749210" hosting pod "kube-controller-manager-no-preload-749210" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-749210" has status "Ready":"False"
	I0103 20:13:49.102424   62015 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-5hwf4" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:49.504820   62015 pod_ready.go:97] node "no-preload-749210" hosting pod "kube-proxy-5hwf4" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-749210" has status "Ready":"False"
	I0103 20:13:49.504852   62015 pod_ready.go:81] duration metric: took 402.417876ms waiting for pod "kube-proxy-5hwf4" in "kube-system" namespace to be "Ready" ...
	E0103 20:13:49.504865   62015 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-749210" hosting pod "kube-proxy-5hwf4" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-749210" has status "Ready":"False"
	I0103 20:13:49.504875   62015 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-749210" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:49.905230   62015 pod_ready.go:97] node "no-preload-749210" hosting pod "kube-scheduler-no-preload-749210" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-749210" has status "Ready":"False"
	I0103 20:13:49.905265   62015 pod_ready.go:81] duration metric: took 400.380902ms waiting for pod "kube-scheduler-no-preload-749210" in "kube-system" namespace to be "Ready" ...
	E0103 20:13:49.905278   62015 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-749210" hosting pod "kube-scheduler-no-preload-749210" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-749210" has status "Ready":"False"
	I0103 20:13:49.905287   62015 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:50.304848   62015 pod_ready.go:97] node "no-preload-749210" hosting pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-749210" has status "Ready":"False"
	I0103 20:13:50.304883   62015 pod_ready.go:81] duration metric: took 399.567527ms waiting for pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace to be "Ready" ...
	E0103 20:13:50.304896   62015 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-749210" hosting pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-749210" has status "Ready":"False"
	I0103 20:13:50.304905   62015 pod_ready.go:38] duration metric: took 1.278887327s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:13:50.304926   62015 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0103 20:13:50.331405   62015 ops.go:34] apiserver oom_adj: -16
	I0103 20:13:50.331428   62015 kubeadm.go:640] restartCluster took 21.020194358s
	I0103 20:13:50.331439   62015 kubeadm.go:406] StartCluster complete in 21.075864121s
	I0103 20:13:50.331459   62015 settings.go:142] acquiring lock: {Name:mkd213c48538fa01cb82b417485055a8adbf5e4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:13:50.331541   62015 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17885-9609/kubeconfig
	I0103 20:13:50.333553   62015 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-9609/kubeconfig: {Name:mkbd4e6a8b39f5a4a43fb71671a7bbd8b1617cf1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:13:50.333969   62015 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0103 20:13:50.334045   62015 addons.go:69] Setting storage-provisioner=true in profile "no-preload-749210"
	I0103 20:13:50.334064   62015 addons.go:237] Setting addon storage-provisioner=true in "no-preload-749210"
	W0103 20:13:50.334072   62015 addons.go:246] addon storage-provisioner should already be in state true
	I0103 20:13:50.334082   62015 config.go:182] Loaded profile config "no-preload-749210": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0103 20:13:50.334121   62015 host.go:66] Checking if "no-preload-749210" exists ...
	I0103 20:13:50.334129   62015 addons.go:69] Setting default-storageclass=true in profile "no-preload-749210"
	I0103 20:13:50.334143   62015 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-749210"
	I0103 20:13:50.334556   62015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:13:50.334588   62015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:13:50.334602   62015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:13:50.334620   62015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:13:50.334681   62015 addons.go:69] Setting metrics-server=true in profile "no-preload-749210"
	I0103 20:13:50.334708   62015 addons.go:237] Setting addon metrics-server=true in "no-preload-749210"
	I0103 20:13:50.334712   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	W0103 20:13:50.334717   62015 addons.go:246] addon metrics-server should already be in state true
	I0103 20:13:50.334756   62015 host.go:66] Checking if "no-preload-749210" exists ...
	I0103 20:13:50.335152   62015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:13:50.335190   62015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:13:50.343173   62015 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-749210" context rescaled to 1 replicas
	I0103 20:13:50.343213   62015 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.245 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0103 20:13:50.345396   62015 out.go:177] * Verifying Kubernetes components...
	I0103 20:13:50.347721   62015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 20:13:50.353122   62015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34207
	I0103 20:13:50.353250   62015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35835
	I0103 20:13:50.353274   62015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44003
	I0103 20:13:50.353737   62015 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:13:50.353896   62015 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:13:50.354283   62015 main.go:141] libmachine: Using API Version  1
	I0103 20:13:50.354299   62015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:13:50.354488   62015 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:13:50.354491   62015 main.go:141] libmachine: Using API Version  1
	I0103 20:13:50.354588   62015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:13:50.354889   62015 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:13:50.355115   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetState
	I0103 20:13:50.355165   62015 main.go:141] libmachine: Using API Version  1
	I0103 20:13:50.355181   62015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:13:50.355244   62015 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:13:50.355746   62015 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:13:50.356199   62015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:13:50.356239   62015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:13:50.356792   62015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:13:50.356830   62015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:13:50.359095   62015 addons.go:237] Setting addon default-storageclass=true in "no-preload-749210"
	W0103 20:13:50.359114   62015 addons.go:246] addon default-storageclass should already be in state true
	I0103 20:13:50.359139   62015 host.go:66] Checking if "no-preload-749210" exists ...
	I0103 20:13:50.359554   62015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:13:50.359595   62015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:13:50.377094   62015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34801
	I0103 20:13:50.377218   62015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33435
	I0103 20:13:50.377679   62015 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:13:50.377779   62015 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:13:50.378353   62015 main.go:141] libmachine: Using API Version  1
	I0103 20:13:50.378376   62015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:13:50.378472   62015 main.go:141] libmachine: Using API Version  1
	I0103 20:13:50.378488   62015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:13:50.378816   62015 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:13:50.378874   62015 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:13:50.379033   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetState
	I0103 20:13:50.379033   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetState
	I0103 20:13:50.381013   62015 main.go:141] libmachine: (no-preload-749210) Calling .DriverName
	I0103 20:13:50.381240   62015 main.go:141] libmachine: (no-preload-749210) Calling .DriverName
	I0103 20:13:50.389265   62015 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 20:13:50.383848   62015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38103
	I0103 20:13:50.391000   62015 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0103 20:13:50.391023   62015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0103 20:13:50.391049   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHHostname
	I0103 20:13:50.391062   62015 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0103 20:13:45.679265   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:45.679374   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:45.690232   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:46.179862   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:46.179963   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:46.190942   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:46.679624   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:46.679738   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:46.691578   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:47.179185   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:47.179280   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:47.193995   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:47.194029   62050 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0103 20:13:47.194050   62050 kubeadm.go:1135] stopping kube-system containers ...
	I0103 20:13:47.194061   62050 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0103 20:13:47.194114   62050 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0103 20:13:47.235512   62050 cri.go:89] found id: ""
	I0103 20:13:47.235625   62050 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0103 20:13:47.251115   62050 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0103 20:13:47.261566   62050 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0103 20:13:47.261631   62050 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0103 20:13:47.271217   62050 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0103 20:13:47.271244   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:47.408550   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:48.262356   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:48.492357   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:48.597607   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:48.699097   62050 api_server.go:52] waiting for apiserver process to appear ...
	I0103 20:13:48.699194   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:13:49.199349   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:13:49.699758   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:13:50.199818   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:13:50.392557   62015 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0103 20:13:50.392577   62015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0103 20:13:50.392597   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHHostname
	I0103 20:13:50.391469   62015 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:13:50.393835   62015 main.go:141] libmachine: Using API Version  1
	I0103 20:13:50.393854   62015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:13:50.394340   62015 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:13:50.394967   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:50.395384   62015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:13:50.395419   62015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:13:50.395602   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHPort
	I0103 20:13:50.395663   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:87:c7", ip: ""} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:50.395683   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:50.395811   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHKeyPath
	I0103 20:13:50.395981   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHUsername
	I0103 20:13:50.396173   62015 sshutil.go:53] new ssh client: &{IP:192.168.61.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/no-preload-749210/id_rsa Username:docker}
	I0103 20:13:50.398544   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:50.399117   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:87:c7", ip: ""} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:50.399142   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:50.399363   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHPort
	I0103 20:13:50.399582   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHKeyPath
	I0103 20:13:50.399692   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHUsername
	I0103 20:13:50.399761   62015 sshutil.go:53] new ssh client: &{IP:192.168.61.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/no-preload-749210/id_rsa Username:docker}
	I0103 20:13:50.434719   62015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44691
	I0103 20:13:50.435279   62015 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:13:50.435938   62015 main.go:141] libmachine: Using API Version  1
	I0103 20:13:50.435972   62015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:13:50.436407   62015 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:13:50.436630   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetState
	I0103 20:13:50.438992   62015 main.go:141] libmachine: (no-preload-749210) Calling .DriverName
	I0103 20:13:50.442816   62015 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0103 20:13:50.442835   62015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0103 20:13:50.442856   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHHostname
	I0103 20:13:50.450157   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:50.451549   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:87:c7", ip: ""} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:50.451575   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHPort
	I0103 20:13:50.451571   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:50.453023   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHKeyPath
	I0103 20:13:50.453577   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHUsername
	I0103 20:13:50.453753   62015 sshutil.go:53] new ssh client: &{IP:192.168.61.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/no-preload-749210/id_rsa Username:docker}
	I0103 20:13:50.556135   62015 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0103 20:13:50.556161   62015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0103 20:13:50.583620   62015 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0103 20:13:50.583643   62015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0103 20:13:50.589708   62015 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0103 20:13:50.614203   62015 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0103 20:13:50.631936   62015 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0103 20:13:50.631961   62015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0103 20:13:50.708658   62015 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0103 20:13:50.772364   62015 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0103 20:13:50.772434   62015 node_ready.go:35] waiting up to 6m0s for node "no-preload-749210" to be "Ready" ...
	I0103 20:13:51.785361   62015 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.195620446s)
	I0103 20:13:51.785407   62015 main.go:141] libmachine: Making call to close driver server
	I0103 20:13:51.785421   62015 main.go:141] libmachine: (no-preload-749210) Calling .Close
	I0103 20:13:51.785427   62015 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.171187695s)
	I0103 20:13:51.785463   62015 main.go:141] libmachine: Making call to close driver server
	I0103 20:13:51.785488   62015 main.go:141] libmachine: (no-preload-749210) Calling .Close
	I0103 20:13:51.785603   62015 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.076908391s)
	I0103 20:13:51.785687   62015 main.go:141] libmachine: (no-preload-749210) DBG | Closing plugin on server side
	I0103 20:13:51.785717   62015 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:13:51.785730   62015 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:13:51.785739   62015 main.go:141] libmachine: Making call to close driver server
	I0103 20:13:51.785741   62015 main.go:141] libmachine: (no-preload-749210) DBG | Closing plugin on server side
	I0103 20:13:51.785748   62015 main.go:141] libmachine: (no-preload-749210) Calling .Close
	I0103 20:13:51.785819   62015 main.go:141] libmachine: Making call to close driver server
	I0103 20:13:51.785837   62015 main.go:141] libmachine: (no-preload-749210) Calling .Close
	I0103 20:13:51.786108   62015 main.go:141] libmachine: (no-preload-749210) DBG | Closing plugin on server side
	I0103 20:13:51.786143   62015 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:13:51.786152   62015 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:13:51.786166   62015 main.go:141] libmachine: Making call to close driver server
	I0103 20:13:51.786178   62015 main.go:141] libmachine: (no-preload-749210) Calling .Close
	I0103 20:13:51.786444   62015 main.go:141] libmachine: (no-preload-749210) DBG | Closing plugin on server side
	I0103 20:13:51.786495   62015 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:13:51.786536   62015 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:13:51.786553   62015 addons.go:473] Verifying addon metrics-server=true in "no-preload-749210"
	I0103 20:13:51.787346   62015 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:13:51.787365   62015 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:13:51.787376   62015 main.go:141] libmachine: Making call to close driver server
	I0103 20:13:51.787386   62015 main.go:141] libmachine: (no-preload-749210) Calling .Close
	I0103 20:13:51.787596   62015 main.go:141] libmachine: (no-preload-749210) DBG | Closing plugin on server side
	I0103 20:13:51.787638   62015 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:13:51.787652   62015 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:13:51.787855   62015 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:13:51.787859   62015 main.go:141] libmachine: (no-preload-749210) DBG | Closing plugin on server side
	I0103 20:13:51.787871   62015 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:13:51.797560   62015 main.go:141] libmachine: Making call to close driver server
	I0103 20:13:51.797584   62015 main.go:141] libmachine: (no-preload-749210) Calling .Close
	I0103 20:13:51.797860   62015 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:13:51.797874   62015 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:13:51.800087   62015 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I0103 20:13:49.131462   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:49.132013   61400 main.go:141] libmachine: (old-k8s-version-927922) Found IP for machine: 192.168.72.12
	I0103 20:13:49.132041   61400 main.go:141] libmachine: (old-k8s-version-927922) Reserving static IP address...
	I0103 20:13:49.132059   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has current primary IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:49.132507   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "old-k8s-version-927922", mac: "52:54:00:61:79:06", ip: "192.168.72.12"} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:13:49.132543   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | skip adding static IP to network mk-old-k8s-version-927922 - found existing host DHCP lease matching {name: "old-k8s-version-927922", mac: "52:54:00:61:79:06", ip: "192.168.72.12"}
	I0103 20:13:49.132560   61400 main.go:141] libmachine: (old-k8s-version-927922) Reserved static IP address: 192.168.72.12
	I0103 20:13:49.132582   61400 main.go:141] libmachine: (old-k8s-version-927922) Waiting for SSH to be available...
	I0103 20:13:49.132597   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | Getting to WaitForSSH function...
	I0103 20:13:49.135129   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:49.135499   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:79:06", ip: ""} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:13:49.135536   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:49.135703   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | Using SSH client type: external
	I0103 20:13:49.135728   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | Using SSH private key: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/old-k8s-version-927922/id_rsa (-rw-------)
	I0103 20:13:49.135765   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.12 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17885-9609/.minikube/machines/old-k8s-version-927922/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0103 20:13:49.135780   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | About to run SSH command:
	I0103 20:13:49.135796   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | exit 0
	I0103 20:13:49.226568   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | SSH cmd err, output: <nil>: 
	I0103 20:13:49.226890   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetConfigRaw
	I0103 20:13:49.227536   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetIP
	I0103 20:13:49.230668   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:49.231038   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:79:06", ip: ""} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:13:49.231064   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:49.231277   61400 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/old-k8s-version-927922/config.json ...
	I0103 20:13:49.231456   61400 machine.go:88] provisioning docker machine ...
	I0103 20:13:49.231473   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .DriverName
	I0103 20:13:49.231708   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetMachineName
	I0103 20:13:49.231862   61400 buildroot.go:166] provisioning hostname "old-k8s-version-927922"
	I0103 20:13:49.231885   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetMachineName
	I0103 20:13:49.232002   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHHostname
	I0103 20:13:49.234637   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:49.235012   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:79:06", ip: ""} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:13:49.235048   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:49.235196   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHPort
	I0103 20:13:49.235338   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHKeyPath
	I0103 20:13:49.235445   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHKeyPath
	I0103 20:13:49.235543   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHUsername
	I0103 20:13:49.235748   61400 main.go:141] libmachine: Using SSH client type: native
	I0103 20:13:49.236196   61400 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.72.12 22 <nil> <nil>}
	I0103 20:13:49.236226   61400 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-927922 && echo "old-k8s-version-927922" | sudo tee /etc/hostname
	I0103 20:13:49.377588   61400 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-927922
	
	I0103 20:13:49.377625   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHHostname
	I0103 20:13:49.381244   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:49.381634   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:79:06", ip: ""} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:13:49.381680   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:49.381885   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHPort
	I0103 20:13:49.382115   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHKeyPath
	I0103 20:13:49.382311   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHKeyPath
	I0103 20:13:49.382538   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHUsername
	I0103 20:13:49.382721   61400 main.go:141] libmachine: Using SSH client type: native
	I0103 20:13:49.383096   61400 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.72.12 22 <nil> <nil>}
	I0103 20:13:49.383125   61400 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-927922' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-927922/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-927922' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0103 20:13:49.517214   61400 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0103 20:13:49.517246   61400 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17885-9609/.minikube CaCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17885-9609/.minikube}
	I0103 20:13:49.517268   61400 buildroot.go:174] setting up certificates
	I0103 20:13:49.517280   61400 provision.go:83] configureAuth start
	I0103 20:13:49.517299   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetMachineName
	I0103 20:13:49.517606   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetIP
	I0103 20:13:49.520819   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:49.521255   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:79:06", ip: ""} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:13:49.521284   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:49.521442   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHHostname
	I0103 20:13:49.523926   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:49.524310   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:79:06", ip: ""} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:13:49.524364   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:49.524495   61400 provision.go:138] copyHostCerts
	I0103 20:13:49.524604   61400 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem, removing ...
	I0103 20:13:49.524618   61400 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem
	I0103 20:13:49.524714   61400 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem (1078 bytes)
	I0103 20:13:49.524842   61400 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem, removing ...
	I0103 20:13:49.524855   61400 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem
	I0103 20:13:49.524885   61400 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem (1123 bytes)
	I0103 20:13:49.524982   61400 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem, removing ...
	I0103 20:13:49.525020   61400 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem
	I0103 20:13:49.525063   61400 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem (1679 bytes)
	I0103 20:13:49.525143   61400 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-927922 san=[192.168.72.12 192.168.72.12 localhost 127.0.0.1 minikube old-k8s-version-927922]
	I0103 20:13:49.896621   61400 provision.go:172] copyRemoteCerts
	I0103 20:13:49.896687   61400 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0103 20:13:49.896728   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHHostname
	I0103 20:13:49.899859   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:49.900239   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:79:06", ip: ""} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:13:49.900274   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:49.900456   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHPort
	I0103 20:13:49.900690   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHKeyPath
	I0103 20:13:49.900873   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHUsername
	I0103 20:13:49.901064   61400 sshutil.go:53] new ssh client: &{IP:192.168.72.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/old-k8s-version-927922/id_rsa Username:docker}
	I0103 20:13:49.993569   61400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0103 20:13:50.017597   61400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0103 20:13:50.041139   61400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0103 20:13:50.064499   61400 provision.go:86] duration metric: configureAuth took 547.178498ms
	I0103 20:13:50.064533   61400 buildroot.go:189] setting minikube options for container-runtime
	I0103 20:13:50.064770   61400 config.go:182] Loaded profile config "old-k8s-version-927922": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0103 20:13:50.064848   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHHostname
	I0103 20:13:50.068198   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:50.068637   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:79:06", ip: ""} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:13:50.068672   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:50.068873   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHPort
	I0103 20:13:50.069080   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHKeyPath
	I0103 20:13:50.069284   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHKeyPath
	I0103 20:13:50.069457   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHUsername
	I0103 20:13:50.069640   61400 main.go:141] libmachine: Using SSH client type: native
	I0103 20:13:50.070115   61400 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.72.12 22 <nil> <nil>}
	I0103 20:13:50.070146   61400 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0103 20:13:50.450845   61400 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0103 20:13:50.450873   61400 machine.go:91] provisioned docker machine in 1.219404511s
	I0103 20:13:50.450886   61400 start.go:300] post-start starting for "old-k8s-version-927922" (driver="kvm2")
	I0103 20:13:50.450899   61400 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0103 20:13:50.450924   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .DriverName
	I0103 20:13:50.451263   61400 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0103 20:13:50.451328   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHHostname
	I0103 20:13:50.455003   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:50.455413   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:79:06", ip: ""} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:13:50.455436   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:50.455644   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHPort
	I0103 20:13:50.455796   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHKeyPath
	I0103 20:13:50.455919   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHUsername
	I0103 20:13:50.456031   61400 sshutil.go:53] new ssh client: &{IP:192.168.72.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/old-k8s-version-927922/id_rsa Username:docker}
	I0103 20:13:50.563846   61400 ssh_runner.go:195] Run: cat /etc/os-release
	I0103 20:13:50.569506   61400 info.go:137] Remote host: Buildroot 2021.02.12
	I0103 20:13:50.569532   61400 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-9609/.minikube/addons for local assets ...
	I0103 20:13:50.569626   61400 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-9609/.minikube/files for local assets ...
	I0103 20:13:50.569726   61400 filesync.go:149] local asset: /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem -> 167952.pem in /etc/ssl/certs
	I0103 20:13:50.569857   61400 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0103 20:13:50.581218   61400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem --> /etc/ssl/certs/167952.pem (1708 bytes)
	I0103 20:13:50.612328   61400 start.go:303] post-start completed in 161.425373ms
	I0103 20:13:50.612359   61400 fix.go:56] fixHost completed within 20.392994827s
	I0103 20:13:50.612383   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHHostname
	I0103 20:13:50.615776   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:50.616241   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:79:06", ip: ""} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:13:50.616268   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:50.616368   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHPort
	I0103 20:13:50.616655   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHKeyPath
	I0103 20:13:50.616849   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHKeyPath
	I0103 20:13:50.617088   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHUsername
	I0103 20:13:50.617286   61400 main.go:141] libmachine: Using SSH client type: native
	I0103 20:13:50.617764   61400 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.72.12 22 <nil> <nil>}
	I0103 20:13:50.617791   61400 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0103 20:13:50.740437   61400 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704312830.691065491
	
	I0103 20:13:50.740506   61400 fix.go:206] guest clock: 1704312830.691065491
	I0103 20:13:50.740528   61400 fix.go:219] Guest: 2024-01-03 20:13:50.691065491 +0000 UTC Remote: 2024-01-03 20:13:50.612363446 +0000 UTC m=+357.606588552 (delta=78.702045ms)
	I0103 20:13:50.740563   61400 fix.go:190] guest clock delta is within tolerance: 78.702045ms
	I0103 20:13:50.740574   61400 start.go:83] releasing machines lock for "old-k8s-version-927922", held for 20.521248173s
	I0103 20:13:50.740606   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .DriverName
	I0103 20:13:50.740879   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetIP
	I0103 20:13:50.743952   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:50.744357   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:79:06", ip: ""} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:13:50.744397   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:50.744668   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .DriverName
	I0103 20:13:50.745932   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .DriverName
	I0103 20:13:50.746189   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .DriverName
	I0103 20:13:50.746302   61400 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0103 20:13:50.746343   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHHostname
	I0103 20:13:50.746759   61400 ssh_runner.go:195] Run: cat /version.json
	I0103 20:13:50.746784   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHHostname
	I0103 20:13:50.749593   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:50.749994   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:79:06", ip: ""} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:13:50.750029   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:50.750496   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHPort
	I0103 20:13:50.750738   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHKeyPath
	I0103 20:13:50.750900   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHUsername
	I0103 20:13:50.751141   61400 sshutil.go:53] new ssh client: &{IP:192.168.72.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/old-k8s-version-927922/id_rsa Username:docker}
	I0103 20:13:50.751696   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHPort
	I0103 20:13:50.751779   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:50.751842   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHKeyPath
	I0103 20:13:50.751898   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:79:06", ip: ""} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:13:50.751960   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHUsername
	I0103 20:13:50.752031   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:50.752063   61400 sshutil.go:53] new ssh client: &{IP:192.168.72.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/old-k8s-version-927922/id_rsa Username:docker}
	I0103 20:13:50.841084   61400 ssh_runner.go:195] Run: systemctl --version
	I0103 20:13:50.882564   61400 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0103 20:13:51.041188   61400 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0103 20:13:51.049023   61400 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0103 20:13:51.049103   61400 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0103 20:13:51.068267   61400 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0103 20:13:51.068297   61400 start.go:475] detecting cgroup driver to use...
	I0103 20:13:51.068371   61400 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0103 20:13:51.086266   61400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0103 20:13:51.101962   61400 docker.go:203] disabling cri-docker service (if available) ...
	I0103 20:13:51.102030   61400 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0103 20:13:51.118269   61400 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0103 20:13:51.134642   61400 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0103 20:13:51.310207   61400 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0103 20:13:51.495609   61400 docker.go:219] disabling docker service ...
	I0103 20:13:51.495743   61400 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0103 20:13:51.512101   61400 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0103 20:13:51.527244   61400 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0103 20:13:51.696874   61400 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0103 20:13:51.836885   61400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0103 20:13:51.849905   61400 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0103 20:13:51.867827   61400 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0103 20:13:51.867895   61400 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:13:51.877598   61400 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0103 20:13:51.877713   61400 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:13:51.886744   61400 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:13:51.898196   61400 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:13:51.910021   61400 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0103 20:13:51.921882   61400 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0103 20:13:51.930668   61400 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0103 20:13:51.930727   61400 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0103 20:13:51.943294   61400 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0103 20:13:51.952273   61400 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0103 20:13:52.065108   61400 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0103 20:13:52.272042   61400 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0103 20:13:52.272143   61400 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0103 20:13:52.277268   61400 start.go:543] Will wait 60s for crictl version
	I0103 20:13:52.277436   61400 ssh_runner.go:195] Run: which crictl
	I0103 20:13:52.281294   61400 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0103 20:13:52.334056   61400 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0103 20:13:52.334231   61400 ssh_runner.go:195] Run: crio --version
	I0103 20:13:52.390900   61400 ssh_runner.go:195] Run: crio --version
	I0103 20:13:52.454400   61400 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I0103 20:13:52.455682   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetIP
	I0103 20:13:52.459194   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:52.459656   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:79:06", ip: ""} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:13:52.459683   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:52.460250   61400 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0103 20:13:52.465579   61400 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0103 20:13:52.480500   61400 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0103 20:13:52.480620   61400 ssh_runner.go:195] Run: sudo crictl images --output json
	I0103 20:13:52.532378   61400 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0103 20:13:52.532450   61400 ssh_runner.go:195] Run: which lz4
	I0103 20:13:52.537132   61400 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0103 20:13:52.541880   61400 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0103 20:13:52.541912   61400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0103 20:13:50.443235   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:13:52.942235   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:13:51.801673   62015 addons.go:508] enable addons completed in 1.467711333s: enabled=[metrics-server storage-provisioner default-storageclass]
	I0103 20:13:52.779944   62015 node_ready.go:58] node "no-preload-749210" has status "Ready":"False"
	I0103 20:13:50.699945   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:13:51.199773   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:13:51.227739   62050 api_server.go:72] duration metric: took 2.52863821s to wait for apiserver process to appear ...
	I0103 20:13:51.227768   62050 api_server.go:88] waiting for apiserver healthz status ...
	I0103 20:13:51.227789   62050 api_server.go:253] Checking apiserver healthz at https://192.168.39.139:8444/healthz ...
	I0103 20:13:51.228288   62050 api_server.go:269] stopped: https://192.168.39.139:8444/healthz: Get "https://192.168.39.139:8444/healthz": dial tcp 192.168.39.139:8444: connect: connection refused
	I0103 20:13:51.728906   62050 api_server.go:253] Checking apiserver healthz at https://192.168.39.139:8444/healthz ...
	I0103 20:13:55.679221   62050 api_server.go:279] https://192.168.39.139:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0103 20:13:55.679255   62050 api_server.go:103] status: https://192.168.39.139:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0103 20:13:55.679273   62050 api_server.go:253] Checking apiserver healthz at https://192.168.39.139:8444/healthz ...
	I0103 20:13:55.722466   62050 api_server.go:279] https://192.168.39.139:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0103 20:13:55.722528   62050 api_server.go:103] status: https://192.168.39.139:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0103 20:13:55.728699   62050 api_server.go:253] Checking apiserver healthz at https://192.168.39.139:8444/healthz ...
	I0103 20:13:55.771739   62050 api_server.go:279] https://192.168.39.139:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0103 20:13:55.771841   62050 api_server.go:103] status: https://192.168.39.139:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0103 20:13:56.228041   62050 api_server.go:253] Checking apiserver healthz at https://192.168.39.139:8444/healthz ...
	I0103 20:13:56.234578   62050 api_server.go:279] https://192.168.39.139:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0103 20:13:56.234618   62050 api_server.go:103] status: https://192.168.39.139:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0103 20:13:56.728122   62050 api_server.go:253] Checking apiserver healthz at https://192.168.39.139:8444/healthz ...
	I0103 20:13:56.734464   62050 api_server.go:279] https://192.168.39.139:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0103 20:13:56.734505   62050 api_server.go:103] status: https://192.168.39.139:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0103 20:13:57.228124   62050 api_server.go:253] Checking apiserver healthz at https://192.168.39.139:8444/healthz ...
	I0103 20:13:57.239527   62050 api_server.go:279] https://192.168.39.139:8444/healthz returned 200:
	ok
	I0103 20:13:57.253416   62050 api_server.go:141] control plane version: v1.28.4
	I0103 20:13:57.253445   62050 api_server.go:131] duration metric: took 6.025669125s to wait for apiserver health ...
	I0103 20:13:57.253456   62050 cni.go:84] Creating CNI manager for ""
	I0103 20:13:57.253464   62050 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0103 20:13:57.255608   62050 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0103 20:13:54.091654   61400 crio.go:444] Took 1.554550 seconds to copy over tarball
	I0103 20:13:54.091734   61400 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0103 20:13:57.252728   61400 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.160960283s)
	I0103 20:13:57.252762   61400 crio.go:451] Took 3.161068 seconds to extract the tarball
	I0103 20:13:57.252773   61400 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0103 20:13:57.307431   61400 ssh_runner.go:195] Run: sudo crictl images --output json
	I0103 20:13:57.362170   61400 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0103 20:13:57.362199   61400 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0103 20:13:57.362266   61400 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 20:13:57.362306   61400 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0103 20:13:57.362491   61400 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0103 20:13:57.362505   61400 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0103 20:13:57.362630   61400 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0103 20:13:57.362663   61400 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0103 20:13:57.362749   61400 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0103 20:13:57.362830   61400 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0103 20:13:57.364964   61400 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0103 20:13:57.364981   61400 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0103 20:13:57.364999   61400 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0103 20:13:57.365049   61400 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0103 20:13:57.365081   61400 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 20:13:57.365159   61400 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0103 20:13:57.365337   61400 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0103 20:13:57.365364   61400 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0103 20:13:57.585886   61400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0103 20:13:57.611291   61400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0103 20:13:57.622467   61400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0103 20:13:57.623443   61400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0103 20:13:57.627321   61400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0103 20:13:57.630211   61400 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0103 20:13:57.630253   61400 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0103 20:13:57.630299   61400 ssh_runner.go:195] Run: which crictl
	I0103 20:13:57.647358   61400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0103 20:13:57.670079   61400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0103 20:13:57.724516   61400 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0103 20:13:57.724560   61400 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0103 20:13:57.724606   61400 ssh_runner.go:195] Run: which crictl
	I0103 20:13:57.747338   61400 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0103 20:13:57.747387   61400 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0103 20:13:57.747451   61400 ssh_runner.go:195] Run: which crictl
	I0103 20:13:57.767682   61400 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0103 20:13:57.767741   61400 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0103 20:13:57.767749   61400 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0103 20:13:57.767772   61400 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0103 20:13:57.767782   61400 ssh_runner.go:195] Run: which crictl
	I0103 20:13:57.767778   61400 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0103 20:13:57.767834   61400 ssh_runner.go:195] Run: which crictl
	I0103 20:13:57.811841   61400 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0103 20:13:57.811895   61400 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0103 20:13:57.811861   61400 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0103 20:13:57.811948   61400 ssh_runner.go:195] Run: which crictl
	I0103 20:13:57.811984   61400 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0103 20:13:57.811948   61400 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0103 20:13:57.812053   61400 ssh_runner.go:195] Run: which crictl
	I0103 20:13:57.812098   61400 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0103 20:13:57.812128   61400 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0103 20:13:57.849648   61400 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0103 20:13:57.849722   61400 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0103 20:13:57.916421   61400 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0103 20:13:57.916483   61400 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0103 20:13:57.916529   61400 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.1
	I0103 20:13:57.936449   61400 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0103 20:13:57.936474   61400 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0103 20:13:57.936485   61400 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0103 20:13:57.936538   61400 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0103 20:13:55.436957   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:13:57.441634   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:13:55.278078   62015 node_ready.go:58] node "no-preload-749210" has status "Ready":"False"
	I0103 20:13:57.280673   62015 node_ready.go:58] node "no-preload-749210" has status "Ready":"False"
	I0103 20:13:58.185787   62015 node_ready.go:49] node "no-preload-749210" has status "Ready":"True"
	I0103 20:13:58.185819   62015 node_ready.go:38] duration metric: took 7.413368774s waiting for node "no-preload-749210" to be "Ready" ...
	I0103 20:13:58.185837   62015 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:13:58.196599   62015 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-rbx58" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:58.203024   62015 pod_ready.go:92] pod "coredns-76f75df574-rbx58" in "kube-system" namespace has status "Ready":"True"
	I0103 20:13:58.203047   62015 pod_ready.go:81] duration metric: took 6.423108ms waiting for pod "coredns-76f75df574-rbx58" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:58.203057   62015 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-749210" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:57.257123   62050 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0103 20:13:57.293641   62050 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0103 20:13:57.341721   62050 system_pods.go:43] waiting for kube-system pods to appear ...
	I0103 20:13:57.360995   62050 system_pods.go:59] 8 kube-system pods found
	I0103 20:13:57.361054   62050 system_pods.go:61] "coredns-5dd5756b68-zxzqg" [d066762e-7e1f-4b3a-9b21-6a7a3ca53edd] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0103 20:13:57.361065   62050 system_pods.go:61] "etcd-default-k8s-diff-port-018788" [c0023ec6-ae61-4532-840e-287e9945f4ec] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0103 20:13:57.361109   62050 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-018788" [bba03f36-cef8-4e19-adc5-1a65756bdf1c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0103 20:13:57.361132   62050 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-018788" [baf7a3c2-3573-4977-be30-d63e4df2de22] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0103 20:13:57.361147   62050 system_pods.go:61] "kube-proxy-wqjlv" [de5a1b04-4bce-4111-bfe8-2adb2f947d78] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0103 20:13:57.361171   62050 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-018788" [cdc74e5c-0085-49ae-9471-fce52a1a6b2f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0103 20:13:57.361189   62050 system_pods.go:61] "metrics-server-57f55c9bc5-pgbbj" [ee3963d9-1627-4e78-91e5-1f92c2011f4b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0103 20:13:57.361198   62050 system_pods.go:61] "storage-provisioner" [ef3511cb-5587-4ea5-86b6-d52cc5afb226] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0103 20:13:57.361207   62050 system_pods.go:74] duration metric: took 19.402129ms to wait for pod list to return data ...
	I0103 20:13:57.361218   62050 node_conditions.go:102] verifying NodePressure condition ...
	I0103 20:13:57.369396   62050 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0103 20:13:57.369435   62050 node_conditions.go:123] node cpu capacity is 2
	I0103 20:13:57.369449   62050 node_conditions.go:105] duration metric: took 8.224276ms to run NodePressure ...
	I0103 20:13:57.369470   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:57.615954   62050 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0103 20:13:57.624280   62050 kubeadm.go:787] kubelet initialised
	I0103 20:13:57.624312   62050 kubeadm.go:788] duration metric: took 8.328431ms waiting for restarted kubelet to initialise ...
	I0103 20:13:57.624321   62050 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:13:57.637920   62050 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-zxzqg" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:58.734401   62050 pod_ready.go:97] node "default-k8s-diff-port-018788" hosting pod "coredns-5dd5756b68-zxzqg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018788" has status "Ready":"False"
	I0103 20:13:58.734439   62050 pod_ready.go:81] duration metric: took 1.096478242s waiting for pod "coredns-5dd5756b68-zxzqg" in "kube-system" namespace to be "Ready" ...
	E0103 20:13:58.734454   62050 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-018788" hosting pod "coredns-5dd5756b68-zxzqg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018788" has status "Ready":"False"
	I0103 20:13:58.734463   62050 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-018788" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:59.605120   62050 pod_ready.go:97] node "default-k8s-diff-port-018788" hosting pod "etcd-default-k8s-diff-port-018788" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018788" has status "Ready":"False"
	I0103 20:13:59.605156   62050 pod_ready.go:81] duration metric: took 870.676494ms waiting for pod "etcd-default-k8s-diff-port-018788" in "kube-system" namespace to be "Ready" ...
	E0103 20:13:59.605168   62050 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-018788" hosting pod "etcd-default-k8s-diff-port-018788" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018788" has status "Ready":"False"
	I0103 20:13:59.605174   62050 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-018788" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:00.176543   62050 pod_ready.go:97] node "default-k8s-diff-port-018788" hosting pod "kube-apiserver-default-k8s-diff-port-018788" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018788" has status "Ready":"False"
	I0103 20:14:00.176583   62050 pod_ready.go:81] duration metric: took 571.400586ms waiting for pod "kube-apiserver-default-k8s-diff-port-018788" in "kube-system" namespace to be "Ready" ...
	E0103 20:14:00.176599   62050 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-018788" hosting pod "kube-apiserver-default-k8s-diff-port-018788" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018788" has status "Ready":"False"
	I0103 20:14:00.176608   62050 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-018788" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:00.201556   62050 pod_ready.go:97] node "default-k8s-diff-port-018788" hosting pod "kube-controller-manager-default-k8s-diff-port-018788" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018788" has status "Ready":"False"
	I0103 20:14:00.201620   62050 pod_ready.go:81] duration metric: took 24.987825ms waiting for pod "kube-controller-manager-default-k8s-diff-port-018788" in "kube-system" namespace to be "Ready" ...
	E0103 20:14:00.201637   62050 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-018788" hosting pod "kube-controller-manager-default-k8s-diff-port-018788" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018788" has status "Ready":"False"
	I0103 20:14:00.201647   62050 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-wqjlv" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:00.233069   62050 pod_ready.go:97] node "default-k8s-diff-port-018788" hosting pod "kube-proxy-wqjlv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018788" has status "Ready":"False"
	I0103 20:14:00.233108   62050 pod_ready.go:81] duration metric: took 31.451633ms waiting for pod "kube-proxy-wqjlv" in "kube-system" namespace to be "Ready" ...
	E0103 20:14:00.233127   62050 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-018788" hosting pod "kube-proxy-wqjlv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018788" has status "Ready":"False"
	I0103 20:14:00.233135   62050 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-018788" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:00.253505   62050 pod_ready.go:97] node "default-k8s-diff-port-018788" hosting pod "kube-scheduler-default-k8s-diff-port-018788" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018788" has status "Ready":"False"
	I0103 20:14:00.253534   62050 pod_ready.go:81] duration metric: took 20.386039ms waiting for pod "kube-scheduler-default-k8s-diff-port-018788" in "kube-system" namespace to be "Ready" ...
	E0103 20:14:00.253550   62050 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-018788" hosting pod "kube-scheduler-default-k8s-diff-port-018788" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018788" has status "Ready":"False"
	I0103 20:14:00.253559   62050 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:00.272626   62050 pod_ready.go:97] node "default-k8s-diff-port-018788" hosting pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018788" has status "Ready":"False"
	I0103 20:14:00.272661   62050 pod_ready.go:81] duration metric: took 19.09311ms waiting for pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace to be "Ready" ...
	E0103 20:14:00.272677   62050 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-018788" hosting pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018788" has status "Ready":"False"
	I0103 20:14:00.272687   62050 pod_ready.go:38] duration metric: took 2.64835186s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:14:00.272705   62050 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0103 20:14:00.321126   62050 ops.go:34] apiserver oom_adj: -16
	I0103 20:14:00.321189   62050 kubeadm.go:640] restartCluster took 23.164374098s
	I0103 20:14:00.321205   62050 kubeadm.go:406] StartCluster complete in 23.210428007s
	I0103 20:14:00.321226   62050 settings.go:142] acquiring lock: {Name:mkd213c48538fa01cb82b417485055a8adbf5e4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:14:00.321322   62050 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17885-9609/kubeconfig
	I0103 20:14:00.323470   62050 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-9609/kubeconfig: {Name:mkbd4e6a8b39f5a4a43fb71671a7bbd8b1617cf1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:14:00.323925   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0103 20:14:00.324242   62050 config.go:182] Loaded profile config "default-k8s-diff-port-018788": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 20:14:00.324381   62050 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0103 20:14:00.324467   62050 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-018788"
	I0103 20:14:00.324487   62050 addons.go:237] Setting addon storage-provisioner=true in "default-k8s-diff-port-018788"
	W0103 20:14:00.324495   62050 addons.go:246] addon storage-provisioner should already be in state true
	I0103 20:14:00.324536   62050 host.go:66] Checking if "default-k8s-diff-port-018788" exists ...
	I0103 20:14:00.324984   62050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:14:00.325013   62050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:14:00.325285   62050 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-018788"
	I0103 20:14:00.325304   62050 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-018788"
	I0103 20:14:00.325329   62050 addons.go:237] Setting addon metrics-server=true in "default-k8s-diff-port-018788"
	W0103 20:14:00.325337   62050 addons.go:246] addon metrics-server should already be in state true
	I0103 20:14:00.325376   62050 host.go:66] Checking if "default-k8s-diff-port-018788" exists ...
	I0103 20:14:00.325309   62050 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-018788"
	I0103 20:14:00.325722   62050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:14:00.325740   62050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:14:00.325935   62050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:14:00.326021   62050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:14:00.347496   62050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42465
	I0103 20:14:00.347895   62050 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:14:00.348392   62050 main.go:141] libmachine: Using API Version  1
	I0103 20:14:00.348415   62050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:14:00.348728   62050 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:14:00.349192   62050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:14:00.349228   62050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:14:00.349916   62050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42905
	I0103 20:14:00.350369   62050 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:14:00.351043   62050 main.go:141] libmachine: Using API Version  1
	I0103 20:14:00.351067   62050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:14:00.351579   62050 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:14:00.352288   62050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:14:00.352392   62050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:14:00.358540   62050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33231
	I0103 20:14:00.359079   62050 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:14:00.359582   62050 main.go:141] libmachine: Using API Version  1
	I0103 20:14:00.359607   62050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:14:00.359939   62050 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:14:00.360114   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetState
	I0103 20:14:00.364583   62050 addons.go:237] Setting addon default-storageclass=true in "default-k8s-diff-port-018788"
	W0103 20:14:00.364614   62050 addons.go:246] addon default-storageclass should already be in state true
	I0103 20:14:00.364645   62050 host.go:66] Checking if "default-k8s-diff-port-018788" exists ...
	I0103 20:14:00.365032   62050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:14:00.365080   62050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:14:00.365268   62050 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-018788" context rescaled to 1 replicas
	I0103 20:14:00.365315   62050 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.139 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0103 20:14:00.367628   62050 out.go:177] * Verifying Kubernetes components...
	I0103 20:14:00.376061   62050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 20:14:00.382421   62050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42521
	I0103 20:14:00.382601   62050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39615
	I0103 20:14:00.382708   62050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40189
	I0103 20:14:00.383285   62050 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:14:00.383310   62050 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:14:00.383837   62050 main.go:141] libmachine: Using API Version  1
	I0103 20:14:00.383837   62050 main.go:141] libmachine: Using API Version  1
	I0103 20:14:00.383855   62050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:14:00.383862   62050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:14:00.384200   62050 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:14:00.384674   62050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:14:00.384701   62050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:14:00.384740   62050 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:14:00.384914   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetState
	I0103 20:14:00.386513   62050 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:14:00.387010   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .DriverName
	I0103 20:14:00.387325   62050 main.go:141] libmachine: Using API Version  1
	I0103 20:14:00.387343   62050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:14:00.389302   62050 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0103 20:14:00.390931   62050 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0103 20:14:00.390945   62050 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0103 20:14:00.390960   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHHostname
	I0103 20:14:00.390651   62050 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:14:00.392318   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetState
	I0103 20:14:00.394641   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:14:00.395185   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c8:9f", ip: ""} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:14:00.395212   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:14:00.395483   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHPort
	I0103 20:14:00.395954   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .DriverName
	I0103 20:14:00.398448   62050 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 20:14:00.400431   62050 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0103 20:14:00.400454   62050 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0103 20:14:00.400476   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHHostname
	I0103 20:14:00.404480   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:14:00.405112   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c8:9f", ip: ""} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:14:00.405145   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:14:00.405765   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHPort
	I0103 20:14:00.405971   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHKeyPath
	I0103 20:14:00.407610   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHUsername
	I0103 20:14:00.407808   62050 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/default-k8s-diff-port-018788/id_rsa Username:docker}
	I0103 20:14:00.410796   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHKeyPath
	I0103 20:14:00.410964   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHUsername
	I0103 20:14:00.411436   62050 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/default-k8s-diff-port-018788/id_rsa Username:docker}
	I0103 20:14:00.417626   62050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41715
	I0103 20:14:00.418201   62050 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:14:00.422710   62050 main.go:141] libmachine: Using API Version  1
	I0103 20:14:00.422743   62050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:14:00.423232   62050 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:14:00.423421   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetState
	I0103 20:14:00.425364   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .DriverName
	I0103 20:14:00.425678   62050 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0103 20:14:00.425697   62050 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0103 20:14:00.425717   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHHostname
	I0103 20:14:00.429190   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:14:00.429720   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c8:9f", ip: ""} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:14:00.429745   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:14:00.429898   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHPort
	I0103 20:14:00.430599   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHKeyPath
	I0103 20:14:00.430803   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHUsername
	I0103 20:14:00.430946   62050 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/default-k8s-diff-port-018788/id_rsa Username:docker}
	I0103 20:14:00.621274   62050 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0103 20:14:00.621356   62050 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0103 20:14:00.641979   62050 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0103 20:14:00.681414   62050 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0103 20:14:00.682076   62050 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0103 20:14:00.682118   62050 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0103 20:14:00.760063   62050 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0103 20:14:00.760095   62050 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0103 20:14:00.833648   62050 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0103 20:14:00.840025   62050 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-018788" to be "Ready" ...
	I0103 20:14:00.840147   62050 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0103 20:14:02.423584   62050 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.78156374s)
	I0103 20:14:02.423631   62050 main.go:141] libmachine: Making call to close driver server
	I0103 20:14:02.423646   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .Close
	I0103 20:14:02.423584   62050 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.742133551s)
	I0103 20:14:02.423765   62050 main.go:141] libmachine: Making call to close driver server
	I0103 20:14:02.423784   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .Close
	I0103 20:14:02.423889   62050 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:14:02.423906   62050 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:14:02.423920   62050 main.go:141] libmachine: Making call to close driver server
	I0103 20:14:02.423930   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .Close
	I0103 20:14:02.424042   62050 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:14:02.424061   62050 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:14:02.424078   62050 main.go:141] libmachine: Making call to close driver server
	I0103 20:14:02.424076   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | Closing plugin on server side
	I0103 20:14:02.424104   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .Close
	I0103 20:14:02.424125   62050 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:14:02.424137   62050 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:14:02.424472   62050 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:14:02.424489   62050 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:14:02.424502   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | Closing plugin on server side
	I0103 20:14:02.431339   62050 main.go:141] libmachine: Making call to close driver server
	I0103 20:14:02.431368   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .Close
	I0103 20:14:02.431754   62050 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:14:02.431789   62050 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:14:02.431809   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | Closing plugin on server side
	I0103 20:14:02.575829   62050 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.742131608s)
	I0103 20:14:02.575880   62050 main.go:141] libmachine: Making call to close driver server
	I0103 20:14:02.575899   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .Close
	I0103 20:14:02.576351   62050 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:14:02.576374   62050 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:14:02.576391   62050 main.go:141] libmachine: Making call to close driver server
	I0103 20:14:02.576400   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .Close
	I0103 20:14:02.576619   62050 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:14:02.576632   62050 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:14:02.576641   62050 addons.go:473] Verifying addon metrics-server=true in "default-k8s-diff-port-018788"
	I0103 20:14:02.578918   62050 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
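The addon flow above stages each manifest under /etc/kubernetes/addons/ inside the VM and then applies it with the kubelet-side kubectl against the in-VM kubeconfig. A minimal Go sketch of that final apply step (a hypothetical standalone helper run inside the VM, not minikube's own code; the manifest path is one taken from the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Mirrors the logged command:
	//   sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f ...
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.28.4/kubectl",
		"apply", "-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Fprintln(os.Stderr, "apply failed:", err)
		os.Exit(1)
	}
}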
	I0103 20:13:58.180342   61400 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I0103 20:13:58.180407   61400 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0103 20:13:58.180464   61400 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0103 20:13:58.194447   61400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 20:13:58.726157   61400 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I0103 20:13:58.726232   61400 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I0103 20:14:00.187852   61400 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.461700942s)
	I0103 20:14:00.187973   61400 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (1.461718478s)
	I0103 20:14:00.188007   61400 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I0103 20:14:00.188104   61400 cache_images.go:92] LoadImages completed in 2.825887616s
	W0103 20:14:00.188202   61400 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0: no such file or directory
	I0103 20:14:00.188285   61400 ssh_runner.go:195] Run: crio config
	I0103 20:14:00.270343   61400 cni.go:84] Creating CNI manager for ""
	I0103 20:14:00.270372   61400 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0103 20:14:00.270393   61400 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0103 20:14:00.270416   61400 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.12 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-927922 NodeName:old-k8s-version-927922 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.12"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.12 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0103 20:14:00.270624   61400 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.12
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-927922"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.12
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.12"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-927922
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.72.12:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0103 20:14:00.270765   61400 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-927922 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.12
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-927922 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0103 20:14:00.270842   61400 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0103 20:14:00.282011   61400 binaries.go:44] Found k8s binaries, skipping transfer
	I0103 20:14:00.282093   61400 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0103 20:14:00.292954   61400 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0103 20:14:00.314616   61400 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0103 20:14:00.366449   61400 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I0103 20:14:00.406579   61400 ssh_runner.go:195] Run: grep 192.168.72.12	control-plane.minikube.internal$ /etc/hosts
	I0103 20:14:00.410923   61400 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.12	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0103 20:14:00.430315   61400 certs.go:56] Setting up /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/old-k8s-version-927922 for IP: 192.168.72.12
	I0103 20:14:00.430352   61400 certs.go:190] acquiring lock for shared ca certs: {Name:mkcbd6a6a2f3ee7625ecf4a1f72bb7f9689bd33d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:14:00.430553   61400 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17885-9609/.minikube/ca.key
	I0103 20:14:00.430619   61400 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.key
	I0103 20:14:00.430718   61400 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/old-k8s-version-927922/client.key
	I0103 20:14:00.430798   61400 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/old-k8s-version-927922/apiserver.key.9a91cab3
	I0103 20:14:00.430854   61400 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/old-k8s-version-927922/proxy-client.key
	I0103 20:14:00.431018   61400 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795.pem (1338 bytes)
	W0103 20:14:00.431071   61400 certs.go:433] ignoring /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795_empty.pem, impossibly tiny 0 bytes
	I0103 20:14:00.431083   61400 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem (1675 bytes)
	I0103 20:14:00.431123   61400 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem (1078 bytes)
	I0103 20:14:00.431158   61400 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem (1123 bytes)
	I0103 20:14:00.431195   61400 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem (1679 bytes)
	I0103 20:14:00.431250   61400 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem (1708 bytes)
	I0103 20:14:00.432123   61400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/old-k8s-version-927922/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0103 20:14:00.472877   61400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/old-k8s-version-927922/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0103 20:14:00.505153   61400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/old-k8s-version-927922/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0103 20:14:00.533850   61400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/old-k8s-version-927922/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0103 20:14:00.564548   61400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0103 20:14:00.596464   61400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0103 20:14:00.626607   61400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0103 20:14:00.655330   61400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0103 20:14:00.681817   61400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem --> /usr/share/ca-certificates/167952.pem (1708 bytes)
	I0103 20:14:00.711039   61400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0103 20:14:00.742406   61400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795.pem --> /usr/share/ca-certificates/16795.pem (1338 bytes)
	I0103 20:14:00.768583   61400 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0103 20:14:00.786833   61400 ssh_runner.go:195] Run: openssl version
	I0103 20:14:00.793561   61400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167952.pem && ln -fs /usr/share/ca-certificates/167952.pem /etc/ssl/certs/167952.pem"
	I0103 20:14:00.807558   61400 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167952.pem
	I0103 20:14:00.812755   61400 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  3 19:07 /usr/share/ca-certificates/167952.pem
	I0103 20:14:00.812816   61400 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167952.pem
	I0103 20:14:00.820657   61400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167952.pem /etc/ssl/certs/3ec20f2e.0"
	I0103 20:14:00.832954   61400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0103 20:14:00.844707   61400 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:14:00.850334   61400 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  3 18:58 /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:14:00.850425   61400 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:14:00.856592   61400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0103 20:14:00.868105   61400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16795.pem && ln -fs /usr/share/ca-certificates/16795.pem /etc/ssl/certs/16795.pem"
	I0103 20:14:00.881551   61400 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16795.pem
	I0103 20:14:00.886462   61400 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  3 19:07 /usr/share/ca-certificates/16795.pem
	I0103 20:14:00.886550   61400 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16795.pem
	I0103 20:14:00.892487   61400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16795.pem /etc/ssl/certs/51391683.0"
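The three symlink commands above follow the OpenSSL trust-store convention: a CA certificate is looked up by a file named <subject-hash>.0 under /etc/ssl/certs. A small Go sketch of the same pattern (a hypothetical helper that needs root; the certificate path is one of the paths from the log, and the output of `openssl x509 -hash` supplies the link name):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	certPath := "/usr/share/ca-certificates/minikubeCA.pem"
	// openssl prints the short subject hash, e.g. b5213941
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	// equivalent to: test -L <link> || ln -fs <cert> <link>
	if _, err := os.Lstat(link); err != nil {
		if err := os.Symlink(certPath, link); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
	fmt.Println("trusted", certPath, "via", link)
}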
	I0103 20:14:00.904363   61400 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0103 20:14:00.909429   61400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0103 20:14:00.915940   61400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0103 20:14:00.922496   61400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0103 20:14:00.928504   61400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0103 20:14:00.936016   61400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0103 20:14:00.943008   61400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
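Each `openssl x509 -noout -checkend 86400` run above simply asks whether the certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit would force the certificate to be regenerated. A minimal Go sketch of the same check (a hypothetical standalone tool taking the certificate path as its argument, not minikube code):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: certcheck <cert.pem>")
		os.Exit(2)
	}
	data, err := os.ReadFile(os.Args[1]) // e.g. /var/lib/minikube/certs/etcd/peer.crt
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// -checkend 86400: fail if the certificate will have expired 24h from now.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h:", cert.NotAfter)
		os.Exit(1)
	}
	fmt.Println("certificate valid until", cert.NotAfter)
}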
	I0103 20:14:00.949401   61400 kubeadm.go:404] StartCluster: {Name:old-k8s-version-927922 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-927922 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.12 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 20:14:00.949524   61400 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0103 20:14:00.949614   61400 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0103 20:14:00.999406   61400 cri.go:89] found id: ""
	I0103 20:14:00.999494   61400 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0103 20:14:01.011041   61400 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0103 20:14:01.011063   61400 kubeadm.go:636] restartCluster start
	I0103 20:14:01.011130   61400 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0103 20:14:01.024488   61400 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:01.026094   61400 kubeconfig.go:92] found "old-k8s-version-927922" server: "https://192.168.72.12:8443"
	I0103 20:14:01.029577   61400 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0103 20:14:01.041599   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:01.041674   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:01.055545   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:01.542034   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:01.542135   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:01.554826   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:02.042049   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:02.042166   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:02.056693   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:02.542275   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:02.542363   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:02.557025   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:03.041864   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:03.041968   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:03.054402   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:59.937077   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:02.440275   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:00.287822   62015 pod_ready.go:102] pod "etcd-no-preload-749210" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:00.712464   62015 pod_ready.go:92] pod "etcd-no-preload-749210" in "kube-system" namespace has status "Ready":"True"
	I0103 20:14:00.712486   62015 pod_ready.go:81] duration metric: took 2.509421629s waiting for pod "etcd-no-preload-749210" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:00.712494   62015 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-749210" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:00.722133   62015 pod_ready.go:92] pod "kube-apiserver-no-preload-749210" in "kube-system" namespace has status "Ready":"True"
	I0103 20:14:00.722175   62015 pod_ready.go:81] duration metric: took 9.671952ms waiting for pod "kube-apiserver-no-preload-749210" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:00.722188   62015 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-749210" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:00.728860   62015 pod_ready.go:92] pod "kube-controller-manager-no-preload-749210" in "kube-system" namespace has status "Ready":"True"
	I0103 20:14:00.728888   62015 pod_ready.go:81] duration metric: took 6.691622ms waiting for pod "kube-controller-manager-no-preload-749210" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:00.728901   62015 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5hwf4" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:00.736669   62015 pod_ready.go:92] pod "kube-proxy-5hwf4" in "kube-system" namespace has status "Ready":"True"
	I0103 20:14:00.736690   62015 pod_ready.go:81] duration metric: took 7.783204ms waiting for pod "kube-proxy-5hwf4" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:00.736699   62015 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-749210" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:02.245720   62015 pod_ready.go:92] pod "kube-scheduler-no-preload-749210" in "kube-system" namespace has status "Ready":"True"
	I0103 20:14:02.245750   62015 pod_ready.go:81] duration metric: took 1.509042822s waiting for pod "kube-scheduler-no-preload-749210" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:02.245764   62015 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:04.253082   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:02.580440   62050 addons.go:508] enable addons completed in 2.256058454s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0103 20:14:02.845486   62050 node_ready.go:58] node "default-k8s-diff-port-018788" has status "Ready":"False"
	I0103 20:14:05.343961   62050 node_ready.go:58] node "default-k8s-diff-port-018788" has status "Ready":"False"
	I0103 20:14:03.542326   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:03.542407   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:03.554128   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:04.041685   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:04.041779   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:04.053727   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:04.542332   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:04.542417   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:04.554478   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:05.042026   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:05.042120   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:05.055763   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:05.541892   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:05.541996   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:05.554974   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:06.042576   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:06.042675   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:06.055902   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:06.542543   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:06.542636   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:06.555494   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:07.041757   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:07.041844   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:07.053440   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:07.542083   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:07.542162   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:07.555336   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:08.041841   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:08.041929   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:08.055229   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:04.936356   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:06.938795   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:06.754049   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:09.253568   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:06.345058   62050 node_ready.go:49] node "default-k8s-diff-port-018788" has status "Ready":"True"
	I0103 20:14:06.345083   62050 node_ready.go:38] duration metric: took 5.505020144s waiting for node "default-k8s-diff-port-018788" to be "Ready" ...
	I0103 20:14:06.345094   62050 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:14:06.351209   62050 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-zxzqg" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:06.357786   62050 pod_ready.go:92] pod "coredns-5dd5756b68-zxzqg" in "kube-system" namespace has status "Ready":"True"
	I0103 20:14:06.357811   62050 pod_ready.go:81] duration metric: took 6.576128ms waiting for pod "coredns-5dd5756b68-zxzqg" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:06.357819   62050 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-018788" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:08.365570   62050 pod_ready.go:102] pod "etcd-default-k8s-diff-port-018788" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:10.366402   62050 pod_ready.go:102] pod "etcd-default-k8s-diff-port-018788" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:08.542285   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:08.542428   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:08.554155   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:09.041695   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:09.041800   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:09.054337   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:09.541733   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:09.541817   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:09.554231   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:10.041785   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:10.041863   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:10.053870   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:10.541893   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:10.541988   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:10.554220   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:11.042579   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:11.042662   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:11.054683   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:11.054717   61400 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0103 20:14:11.054728   61400 kubeadm.go:1135] stopping kube-system containers ...
	I0103 20:14:11.054738   61400 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0103 20:14:11.054804   61400 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0103 20:14:11.099741   61400 cri.go:89] found id: ""
	I0103 20:14:11.099806   61400 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0103 20:14:11.115939   61400 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0103 20:14:11.125253   61400 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0103 20:14:11.125309   61400 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0103 20:14:11.134126   61400 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0103 20:14:11.134151   61400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:14:11.244373   61400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:14:12.026578   61400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:14:12.238755   61400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:14:12.326635   61400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
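Rather than a full `kubeadm init`, the restart path above replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated /var/tmp/minikube/kubeadm.yaml. A sketch of that sequence as a standalone loop (assumes it runs inside the VM where the v1.16.0 binaries live; minikube itself drives these commands over SSH, so this is illustrative only):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	phases := []string{
		"certs all",
		"kubeconfig all",
		"kubelet-start",
		"control-plane all",
		"etcd local",
	}
	for _, phase := range phases {
		cmd := "sudo env PATH=\"/var/lib/minikube/binaries/v1.16.0:$PATH\" " +
			"kubeadm init phase " + phase + " --config /var/tmp/minikube/kubeadm.yaml"
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Fprintf(os.Stderr, "phase %q failed: %v\n%s", phase, err, out)
			os.Exit(1)
		}
	}
	fmt.Println("all kubeadm init phases completed")
}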
	I0103 20:14:12.411494   61400 api_server.go:52] waiting for apiserver process to appear ...
	I0103 20:14:12.411597   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:14:12.912324   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:14:09.437304   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:11.937833   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:11.755341   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:14.254295   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:10.864860   62050 pod_ready.go:92] pod "etcd-default-k8s-diff-port-018788" in "kube-system" namespace has status "Ready":"True"
	I0103 20:14:10.864892   62050 pod_ready.go:81] duration metric: took 4.507065243s waiting for pod "etcd-default-k8s-diff-port-018788" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:10.864906   62050 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-018788" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:10.871510   62050 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-018788" in "kube-system" namespace has status "Ready":"True"
	I0103 20:14:10.871532   62050 pod_ready.go:81] duration metric: took 6.618246ms waiting for pod "kube-apiserver-default-k8s-diff-port-018788" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:10.871542   62050 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-018788" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:10.877385   62050 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-018788" in "kube-system" namespace has status "Ready":"True"
	I0103 20:14:10.877411   62050 pod_ready.go:81] duration metric: took 5.859396ms waiting for pod "kube-controller-manager-default-k8s-diff-port-018788" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:10.877423   62050 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wqjlv" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:10.883355   62050 pod_ready.go:92] pod "kube-proxy-wqjlv" in "kube-system" namespace has status "Ready":"True"
	I0103 20:14:10.883381   62050 pod_ready.go:81] duration metric: took 5.949857ms waiting for pod "kube-proxy-wqjlv" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:10.883391   62050 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-018788" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:10.888160   62050 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-018788" in "kube-system" namespace has status "Ready":"True"
	I0103 20:14:10.888186   62050 pod_ready.go:81] duration metric: took 4.782893ms waiting for pod "kube-scheduler-default-k8s-diff-port-018788" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:10.888198   62050 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:12.896310   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:14.897306   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
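The pod_ready lines above boil down to polling each pod's PodReady condition; the metrics-server pods never flip to True, which is what the repeated "Ready":"False" entries record. A sketch of that condition check with client-go (the package paths are the standard k8s.io modules; the namespace and pod name are placeholders, not values from this run):

package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "example-pod", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("pod %s Ready=%v\n", pod.Name, podReady(pod))
}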
	I0103 20:14:13.412544   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:14:13.912006   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:14:13.939301   61400 api_server.go:72] duration metric: took 1.527807222s to wait for apiserver process to appear ...
	I0103 20:14:13.939328   61400 api_server.go:88] waiting for apiserver healthz status ...
	I0103 20:14:13.939357   61400 api_server.go:253] Checking apiserver healthz at https://192.168.72.12:8443/healthz ...
	I0103 20:14:13.941001   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:16.438272   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:16.752567   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:18.758446   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:17.397429   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:19.399199   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:18.940403   61400 api_server.go:269] stopped: https://192.168.72.12:8443/healthz: Get "https://192.168.72.12:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0103 20:14:18.940444   61400 api_server.go:253] Checking apiserver healthz at https://192.168.72.12:8443/healthz ...
	I0103 20:14:19.563874   61400 api_server.go:279] https://192.168.72.12:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0103 20:14:19.563907   61400 api_server.go:103] status: https://192.168.72.12:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0103 20:14:19.563925   61400 api_server.go:253] Checking apiserver healthz at https://192.168.72.12:8443/healthz ...
	I0103 20:14:19.591366   61400 api_server.go:279] https://192.168.72.12:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0103 20:14:19.591397   61400 api_server.go:103] status: https://192.168.72.12:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0103 20:14:19.939684   61400 api_server.go:253] Checking apiserver healthz at https://192.168.72.12:8443/healthz ...
	I0103 20:14:19.951743   61400 api_server.go:279] https://192.168.72.12:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0103 20:14:19.951795   61400 api_server.go:103] status: https://192.168.72.12:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0103 20:14:20.439712   61400 api_server.go:253] Checking apiserver healthz at https://192.168.72.12:8443/healthz ...
	I0103 20:14:20.448251   61400 api_server.go:279] https://192.168.72.12:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0103 20:14:20.448289   61400 api_server.go:103] status: https://192.168.72.12:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0103 20:14:20.939773   61400 api_server.go:253] Checking apiserver healthz at https://192.168.72.12:8443/healthz ...
	I0103 20:14:20.946227   61400 api_server.go:279] https://192.168.72.12:8443/healthz returned 200:
	ok
	I0103 20:14:20.954666   61400 api_server.go:141] control plane version: v1.16.0
	I0103 20:14:20.954702   61400 api_server.go:131] duration metric: took 7.015366394s to wait for apiserver health ...
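The healthz wait above shows the usual progression while an apiserver restarts: connection timeouts, then 403 (anonymous access to /healthz is rejected before the RBAC bootstrap roles exist), then 500 while poststarthooks such as rbac/bootstrap-roles still fail, and finally 200. A minimal Go sketch of such a polling loop (TLS verification is skipped here for illustration, since the probe runs before client certificates are wired up; this is an assumption, not minikube's exact implementation):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns 200 or the timeout elapses.
// 403 and 500 responses mean the apiserver is up but not yet healthy.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // "/healthz returned 200: ok"
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver %s did not become healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.12:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver is healthy")
}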
	I0103 20:14:20.954718   61400 cni.go:84] Creating CNI manager for ""
	I0103 20:14:20.954726   61400 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0103 20:14:20.956786   61400 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0103 20:14:20.958180   61400 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0103 20:14:20.969609   61400 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0103 20:14:20.986353   61400 system_pods.go:43] waiting for kube-system pods to appear ...
	I0103 20:14:20.996751   61400 system_pods.go:59] 8 kube-system pods found
	I0103 20:14:20.996786   61400 system_pods.go:61] "coredns-5644d7b6d9-99qhg" [d43c98b2-5ed4-42a7-bdb9-28f5b3c7b99f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0103 20:14:20.996795   61400 system_pods.go:61] "coredns-5644d7b6d9-nvbsl" [22884cc1-f360-4ee8-bafc-340bb24faa41] Running
	I0103 20:14:20.996804   61400 system_pods.go:61] "etcd-old-k8s-version-927922" [f395d0d3-416a-4915-b587-6e51eb8648a2] Running
	I0103 20:14:20.996811   61400 system_pods.go:61] "kube-apiserver-old-k8s-version-927922" [c62c011b-74fa-440c-9ff9-56721cb1a58d] Running
	I0103 20:14:20.996821   61400 system_pods.go:61] "kube-controller-manager-old-k8s-version-927922" [3d85024c-8cc4-4a99-b8b7-2151c10918f7] Pending
	I0103 20:14:20.996828   61400 system_pods.go:61] "kube-proxy-jk7jw" [ef720f69-1bfd-4e75-9943-ff7ee3145ecc] Running
	I0103 20:14:20.996835   61400 system_pods.go:61] "kube-scheduler-old-k8s-version-927922" [74ed1414-7a76-45bd-9c0e-e4c9670d4c1b] Running
	I0103 20:14:20.996845   61400 system_pods.go:61] "storage-provisioner" [4157ff41-1b3b-4eb7-b23b-2de69398161c] Running
	I0103 20:14:20.996857   61400 system_pods.go:74] duration metric: took 10.474644ms to wait for pod list to return data ...
	I0103 20:14:20.996870   61400 node_conditions.go:102] verifying NodePressure condition ...
	I0103 20:14:21.000635   61400 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0103 20:14:21.000665   61400 node_conditions.go:123] node cpu capacity is 2
	I0103 20:14:21.000677   61400 node_conditions.go:105] duration metric: took 3.80125ms to run NodePressure ...
	I0103 20:14:21.000698   61400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:14:21.233310   61400 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0103 20:14:21.241408   61400 kubeadm.go:787] kubelet initialised
	I0103 20:14:21.241445   61400 kubeadm.go:788] duration metric: took 8.096237ms waiting for restarted kubelet to initialise ...
	I0103 20:14:21.241456   61400 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:14:21.251897   61400 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-99qhg" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:21.264624   61400 pod_ready.go:97] node "old-k8s-version-927922" hosting pod "coredns-5644d7b6d9-99qhg" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-927922" has status "Ready":"False"
	I0103 20:14:21.264657   61400 pod_ready.go:81] duration metric: took 12.728783ms waiting for pod "coredns-5644d7b6d9-99qhg" in "kube-system" namespace to be "Ready" ...
	E0103 20:14:21.264670   61400 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-927922" hosting pod "coredns-5644d7b6d9-99qhg" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-927922" has status "Ready":"False"
	I0103 20:14:21.264700   61400 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-nvbsl" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:21.282371   61400 pod_ready.go:97] node "old-k8s-version-927922" hosting pod "coredns-5644d7b6d9-nvbsl" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-927922" has status "Ready":"False"
	I0103 20:14:21.282400   61400 pod_ready.go:81] duration metric: took 17.657706ms waiting for pod "coredns-5644d7b6d9-nvbsl" in "kube-system" namespace to be "Ready" ...
	E0103 20:14:21.282410   61400 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-927922" hosting pod "coredns-5644d7b6d9-nvbsl" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-927922" has status "Ready":"False"
	I0103 20:14:21.282416   61400 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-927922" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:21.288986   61400 pod_ready.go:97] node "old-k8s-version-927922" hosting pod "etcd-old-k8s-version-927922" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-927922" has status "Ready":"False"
	I0103 20:14:21.289016   61400 pod_ready.go:81] duration metric: took 6.590018ms waiting for pod "etcd-old-k8s-version-927922" in "kube-system" namespace to be "Ready" ...
	E0103 20:14:21.289028   61400 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-927922" hosting pod "etcd-old-k8s-version-927922" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-927922" has status "Ready":"False"
	I0103 20:14:21.289036   61400 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-927922" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:21.391318   61400 pod_ready.go:97] node "old-k8s-version-927922" hosting pod "kube-apiserver-old-k8s-version-927922" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-927922" has status "Ready":"False"
	I0103 20:14:21.391358   61400 pod_ready.go:81] duration metric: took 102.309139ms waiting for pod "kube-apiserver-old-k8s-version-927922" in "kube-system" namespace to be "Ready" ...
	E0103 20:14:21.391371   61400 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-927922" hosting pod "kube-apiserver-old-k8s-version-927922" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-927922" has status "Ready":"False"
	I0103 20:14:21.391390   61400 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-927922" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:21.790147   61400 pod_ready.go:97] node "old-k8s-version-927922" hosting pod "kube-controller-manager-old-k8s-version-927922" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-927922" has status "Ready":"False"
	I0103 20:14:21.790184   61400 pod_ready.go:81] duration metric: took 398.776559ms waiting for pod "kube-controller-manager-old-k8s-version-927922" in "kube-system" namespace to be "Ready" ...
	E0103 20:14:21.790202   61400 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-927922" hosting pod "kube-controller-manager-old-k8s-version-927922" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-927922" has status "Ready":"False"
	I0103 20:14:21.790213   61400 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jk7jw" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:22.190088   61400 pod_ready.go:97] node "old-k8s-version-927922" hosting pod "kube-proxy-jk7jw" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-927922" has status "Ready":"False"
	I0103 20:14:22.190118   61400 pod_ready.go:81] duration metric: took 399.895826ms waiting for pod "kube-proxy-jk7jw" in "kube-system" namespace to be "Ready" ...
	E0103 20:14:22.190132   61400 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-927922" hosting pod "kube-proxy-jk7jw" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-927922" has status "Ready":"False"
	I0103 20:14:22.190146   61400 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-927922" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:22.590412   61400 pod_ready.go:97] node "old-k8s-version-927922" hosting pod "kube-scheduler-old-k8s-version-927922" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-927922" has status "Ready":"False"
	I0103 20:14:22.590470   61400 pod_ready.go:81] duration metric: took 400.308646ms waiting for pod "kube-scheduler-old-k8s-version-927922" in "kube-system" namespace to be "Ready" ...
	E0103 20:14:22.590484   61400 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-927922" hosting pod "kube-scheduler-old-k8s-version-927922" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-927922" has status "Ready":"False"
	I0103 20:14:22.590494   61400 pod_ready.go:38] duration metric: took 1.349028144s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:14:22.590533   61400 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0103 20:14:22.610035   61400 ops.go:34] apiserver oom_adj: -16
	I0103 20:14:22.610060   61400 kubeadm.go:640] restartCluster took 21.598991094s
	I0103 20:14:22.610071   61400 kubeadm.go:406] StartCluster complete in 21.660680377s
	I0103 20:14:22.610091   61400 settings.go:142] acquiring lock: {Name:mkd213c48538fa01cb82b417485055a8adbf5e4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:14:22.610178   61400 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17885-9609/kubeconfig
	I0103 20:14:22.613053   61400 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-9609/kubeconfig: {Name:mkbd4e6a8b39f5a4a43fb71671a7bbd8b1617cf1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:14:22.613314   61400 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0103 20:14:22.613472   61400 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0103 20:14:22.613563   61400 config.go:182] Loaded profile config "old-k8s-version-927922": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0103 20:14:22.613570   61400 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-927922"
	I0103 20:14:22.613584   61400 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-927922"
	I0103 20:14:22.613597   61400 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-927922"
	I0103 20:14:22.613625   61400 addons.go:237] Setting addon metrics-server=true in "old-k8s-version-927922"
	W0103 20:14:22.613637   61400 addons.go:246] addon metrics-server should already be in state true
	I0103 20:14:22.613639   61400 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-927922"
	I0103 20:14:22.613605   61400 addons.go:237] Setting addon storage-provisioner=true in "old-k8s-version-927922"
	W0103 20:14:22.613706   61400 addons.go:246] addon storage-provisioner should already be in state true
	I0103 20:14:22.613769   61400 host.go:66] Checking if "old-k8s-version-927922" exists ...
	I0103 20:14:22.613691   61400 host.go:66] Checking if "old-k8s-version-927922" exists ...
	I0103 20:14:22.614097   61400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:14:22.614129   61400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:14:22.614170   61400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:14:22.614204   61400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:14:22.614293   61400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:14:22.614334   61400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:14:22.631032   61400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43511
	I0103 20:14:22.631689   61400 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:14:22.632149   61400 main.go:141] libmachine: Using API Version  1
	I0103 20:14:22.632172   61400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:14:22.632553   61400 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:14:22.632811   61400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46781
	I0103 20:14:22.632820   61400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42907
	I0103 20:14:22.633222   61400 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:14:22.633340   61400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:14:22.633352   61400 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:14:22.633385   61400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:14:22.633695   61400 main.go:141] libmachine: Using API Version  1
	I0103 20:14:22.633719   61400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:14:22.634106   61400 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:14:22.634117   61400 main.go:141] libmachine: Using API Version  1
	I0103 20:14:22.634139   61400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:14:22.634544   61400 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:14:22.634711   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetState
	I0103 20:14:22.634782   61400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:14:22.634821   61400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:14:22.639076   61400 addons.go:237] Setting addon default-storageclass=true in "old-k8s-version-927922"
	W0103 20:14:22.639233   61400 addons.go:246] addon default-storageclass should already be in state true
	I0103 20:14:22.639274   61400 host.go:66] Checking if "old-k8s-version-927922" exists ...
	I0103 20:14:22.640636   61400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:14:22.640703   61400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:14:22.653581   61400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38773
	I0103 20:14:22.654135   61400 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:14:22.654693   61400 main.go:141] libmachine: Using API Version  1
	I0103 20:14:22.654720   61400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:14:22.655050   61400 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:14:22.655267   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetState
	I0103 20:14:22.655611   61400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45149
	I0103 20:14:22.656058   61400 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:14:22.656503   61400 main.go:141] libmachine: Using API Version  1
	I0103 20:14:22.656527   61400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:14:22.656976   61400 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:14:22.657189   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetState
	I0103 20:14:22.657904   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .DriverName
	I0103 20:14:22.660090   61400 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 20:14:22.659044   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .DriverName
	I0103 20:14:22.659283   61400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38149
	I0103 20:14:22.663010   61400 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0103 20:14:22.663022   61400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0103 20:14:22.663037   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHHostname
	I0103 20:14:22.664758   61400 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0103 20:14:22.663341   61400 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:14:22.665665   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:14:22.666177   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:79:06", ip: ""} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:14:22.666201   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:14:22.666255   61400 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0103 20:14:22.666266   61400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0103 20:14:22.666282   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHHostname
	I0103 20:14:22.666382   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHPort
	I0103 20:14:22.666505   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHKeyPath
	I0103 20:14:22.666726   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHUsername
	I0103 20:14:22.666884   61400 sshutil.go:53] new ssh client: &{IP:192.168.72.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/old-k8s-version-927922/id_rsa Username:docker}
	I0103 20:14:22.666901   61400 main.go:141] libmachine: Using API Version  1
	I0103 20:14:22.666926   61400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:14:22.667344   61400 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:14:22.667940   61400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:14:22.667983   61400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:14:22.668718   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:14:22.668933   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:79:06", ip: ""} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:14:22.668961   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:14:22.669116   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHPort
	I0103 20:14:22.669262   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHKeyPath
	I0103 20:14:22.669388   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHUsername
	I0103 20:14:22.669506   61400 sshutil.go:53] new ssh client: &{IP:192.168.72.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/old-k8s-version-927922/id_rsa Username:docker}
	I0103 20:14:22.711545   61400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42371
	I0103 20:14:22.711969   61400 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:14:22.712493   61400 main.go:141] libmachine: Using API Version  1
	I0103 20:14:22.712519   61400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:14:22.712853   61400 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:14:22.713077   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetState
	I0103 20:14:22.715086   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .DriverName
	I0103 20:14:22.715371   61400 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0103 20:14:22.715390   61400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0103 20:14:22.715405   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHHostname
	I0103 20:14:22.718270   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:14:22.718638   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:79:06", ip: ""} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:14:22.718671   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:14:22.718876   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHPort
	I0103 20:14:22.719076   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHKeyPath
	I0103 20:14:22.719263   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHUsername
	I0103 20:14:22.719451   61400 sshutil.go:53] new ssh client: &{IP:192.168.72.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/old-k8s-version-927922/id_rsa Username:docker}
	I0103 20:14:22.795601   61400 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0103 20:14:22.887631   61400 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0103 20:14:22.887660   61400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0103 20:14:22.889717   61400 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0103 20:14:22.932293   61400 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0103 20:14:22.932324   61400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0103 20:14:22.939480   61400 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0103 20:14:22.979425   61400 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0103 20:14:22.979455   61400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0103 20:14:23.017495   61400 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0103 20:14:23.255786   61400 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-927922" context rescaled to 1 replicas
	I0103 20:14:23.255832   61400 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.12 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0103 20:14:23.257785   61400 out.go:177] * Verifying Kubernetes components...
	I0103 20:14:18.937821   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:21.435750   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:23.438082   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:23.259380   61400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 20:14:23.430371   61400 main.go:141] libmachine: Making call to close driver server
	I0103 20:14:23.430402   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .Close
	I0103 20:14:23.430532   61400 main.go:141] libmachine: Making call to close driver server
	I0103 20:14:23.430557   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .Close
	I0103 20:14:23.430710   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | Closing plugin on server side
	I0103 20:14:23.430741   61400 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:14:23.430778   61400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:14:23.430798   61400 main.go:141] libmachine: Making call to close driver server
	I0103 20:14:23.430806   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .Close
	I0103 20:14:23.432333   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | Closing plugin on server side
	I0103 20:14:23.432345   61400 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:14:23.432353   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | Closing plugin on server side
	I0103 20:14:23.432363   61400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:14:23.432373   61400 main.go:141] libmachine: Making call to close driver server
	I0103 20:14:23.432382   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .Close
	I0103 20:14:23.432383   61400 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:14:23.432394   61400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:14:23.432602   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | Closing plugin on server side
	I0103 20:14:23.432654   61400 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:14:23.432674   61400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:14:23.438313   61400 main.go:141] libmachine: Making call to close driver server
	I0103 20:14:23.438335   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .Close
	I0103 20:14:23.438566   61400 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:14:23.438585   61400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:14:23.438662   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | Closing plugin on server side
	I0103 20:14:23.598304   61400 main.go:141] libmachine: Making call to close driver server
	I0103 20:14:23.598338   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .Close
	I0103 20:14:23.598363   61400 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-927922" to be "Ready" ...
	I0103 20:14:23.598669   61400 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:14:23.598687   61400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:14:23.598696   61400 main.go:141] libmachine: Making call to close driver server
	I0103 20:14:23.598705   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .Close
	I0103 20:14:23.598917   61400 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:14:23.598938   61400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:14:23.598960   61400 addons.go:473] Verifying addon metrics-server=true in "old-k8s-version-927922"
	I0103 20:14:23.601038   61400 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0103 20:14:21.253707   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:23.254276   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:21.399352   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:23.895781   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:23.602562   61400 addons.go:508] enable addons completed in 989.095706ms: enabled=[storage-provisioner default-storageclass metrics-server]
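The addon enablement recorded above (storage-provisioner, default-storageclass, metrics-server) maps onto operations the minikube CLI exposes directly; the following is a minimal sketch against this profile, for reference only — these commands are assumed equivalents, not part of the captured run:

	# list addon state for the profile, then enable metrics-server explicitly
	minikube -p old-k8s-version-927922 addons list
	minikube -p old-k8s-version-927922 addons enable metrics-server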
	I0103 20:14:25.602268   61400 node_ready.go:58] node "old-k8s-version-927922" has status "Ready":"False"
	I0103 20:14:27.602561   61400 node_ready.go:58] node "old-k8s-version-927922" has status "Ready":"False"
	I0103 20:14:25.439366   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:27.934938   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:25.753982   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:28.253688   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:26.398696   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:28.896789   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:29.603040   61400 node_ready.go:58] node "old-k8s-version-927922" has status "Ready":"False"
	I0103 20:14:30.102640   61400 node_ready.go:49] node "old-k8s-version-927922" has status "Ready":"True"
	I0103 20:14:30.102663   61400 node_ready.go:38] duration metric: took 6.504277703s waiting for node "old-k8s-version-927922" to be "Ready" ...
	I0103 20:14:30.102672   61400 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:14:30.107593   61400 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-nvbsl" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:30.112792   61400 pod_ready.go:92] pod "coredns-5644d7b6d9-nvbsl" in "kube-system" namespace has status "Ready":"True"
	I0103 20:14:30.112817   61400 pod_ready.go:81] duration metric: took 5.195453ms waiting for pod "coredns-5644d7b6d9-nvbsl" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:30.112828   61400 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-927922" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:30.117802   61400 pod_ready.go:92] pod "etcd-old-k8s-version-927922" in "kube-system" namespace has status "Ready":"True"
	I0103 20:14:30.117827   61400 pod_ready.go:81] duration metric: took 4.989616ms waiting for pod "etcd-old-k8s-version-927922" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:30.117839   61400 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-927922" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:30.123548   61400 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-927922" in "kube-system" namespace has status "Ready":"True"
	I0103 20:14:30.123570   61400 pod_ready.go:81] duration metric: took 5.723206ms waiting for pod "kube-apiserver-old-k8s-version-927922" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:30.123580   61400 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-927922" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:30.128232   61400 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-927922" in "kube-system" namespace has status "Ready":"True"
	I0103 20:14:30.128257   61400 pod_ready.go:81] duration metric: took 4.670196ms waiting for pod "kube-controller-manager-old-k8s-version-927922" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:30.128269   61400 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jk7jw" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:30.503735   61400 pod_ready.go:92] pod "kube-proxy-jk7jw" in "kube-system" namespace has status "Ready":"True"
	I0103 20:14:30.503782   61400 pod_ready.go:81] duration metric: took 375.504442ms waiting for pod "kube-proxy-jk7jw" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:30.503796   61400 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-927922" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:30.903117   61400 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-927922" in "kube-system" namespace has status "Ready":"True"
	I0103 20:14:30.903145   61400 pod_ready.go:81] duration metric: took 399.341883ms waiting for pod "kube-scheduler-old-k8s-version-927922" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:30.903155   61400 pod_ready.go:38] duration metric: took 800.474934ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:14:30.903167   61400 api_server.go:52] waiting for apiserver process to appear ...
	I0103 20:14:30.903215   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:14:30.917506   61400 api_server.go:72] duration metric: took 7.661640466s to wait for apiserver process to appear ...
	I0103 20:14:30.917537   61400 api_server.go:88] waiting for apiserver healthz status ...
	I0103 20:14:30.917558   61400 api_server.go:253] Checking apiserver healthz at https://192.168.72.12:8443/healthz ...
	I0103 20:14:30.923921   61400 api_server.go:279] https://192.168.72.12:8443/healthz returned 200:
	ok
	I0103 20:14:30.924810   61400 api_server.go:141] control plane version: v1.16.0
	I0103 20:14:30.924830   61400 api_server.go:131] duration metric: took 7.286806ms to wait for apiserver health ...
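The healthz probe above hits the apiserver endpoint directly over HTTPS; roughly the same check can be made through the cluster's kubeconfig credentials instead. A sketch, assuming kubectl is pointed at this profile's context (not taken from the run):

	# returns the literal body "ok" when the apiserver is healthy
	kubectl --context old-k8s-version-927922 get --raw /healthz
	# per-check breakdown of the same endpoint
	kubectl --context old-k8s-version-927922 get --raw '/healthz?verbose'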
	I0103 20:14:30.924837   61400 system_pods.go:43] waiting for kube-system pods to appear ...
	I0103 20:14:31.105108   61400 system_pods.go:59] 7 kube-system pods found
	I0103 20:14:31.105140   61400 system_pods.go:61] "coredns-5644d7b6d9-nvbsl" [22884cc1-f360-4ee8-bafc-340bb24faa41] Running
	I0103 20:14:31.105144   61400 system_pods.go:61] "etcd-old-k8s-version-927922" [f395d0d3-416a-4915-b587-6e51eb8648a2] Running
	I0103 20:14:31.105149   61400 system_pods.go:61] "kube-apiserver-old-k8s-version-927922" [c62c011b-74fa-440c-9ff9-56721cb1a58d] Running
	I0103 20:14:31.105153   61400 system_pods.go:61] "kube-controller-manager-old-k8s-version-927922" [3d85024c-8cc4-4a99-b8b7-2151c10918f7] Running
	I0103 20:14:31.105156   61400 system_pods.go:61] "kube-proxy-jk7jw" [ef720f69-1bfd-4e75-9943-ff7ee3145ecc] Running
	I0103 20:14:31.105160   61400 system_pods.go:61] "kube-scheduler-old-k8s-version-927922" [74ed1414-7a76-45bd-9c0e-e4c9670d4c1b] Running
	I0103 20:14:31.105164   61400 system_pods.go:61] "storage-provisioner" [4157ff41-1b3b-4eb7-b23b-2de69398161c] Running
	I0103 20:14:31.105168   61400 system_pods.go:74] duration metric: took 180.326535ms to wait for pod list to return data ...
	I0103 20:14:31.105176   61400 default_sa.go:34] waiting for default service account to be created ...
	I0103 20:14:31.303919   61400 default_sa.go:45] found service account: "default"
	I0103 20:14:31.303945   61400 default_sa.go:55] duration metric: took 198.763782ms for default service account to be created ...
	I0103 20:14:31.303952   61400 system_pods.go:116] waiting for k8s-apps to be running ...
	I0103 20:14:31.504913   61400 system_pods.go:86] 7 kube-system pods found
	I0103 20:14:31.504942   61400 system_pods.go:89] "coredns-5644d7b6d9-nvbsl" [22884cc1-f360-4ee8-bafc-340bb24faa41] Running
	I0103 20:14:31.504948   61400 system_pods.go:89] "etcd-old-k8s-version-927922" [f395d0d3-416a-4915-b587-6e51eb8648a2] Running
	I0103 20:14:31.504952   61400 system_pods.go:89] "kube-apiserver-old-k8s-version-927922" [c62c011b-74fa-440c-9ff9-56721cb1a58d] Running
	I0103 20:14:31.504960   61400 system_pods.go:89] "kube-controller-manager-old-k8s-version-927922" [3d85024c-8cc4-4a99-b8b7-2151c10918f7] Running
	I0103 20:14:31.504964   61400 system_pods.go:89] "kube-proxy-jk7jw" [ef720f69-1bfd-4e75-9943-ff7ee3145ecc] Running
	I0103 20:14:31.504967   61400 system_pods.go:89] "kube-scheduler-old-k8s-version-927922" [74ed1414-7a76-45bd-9c0e-e4c9670d4c1b] Running
	I0103 20:14:31.504971   61400 system_pods.go:89] "storage-provisioner" [4157ff41-1b3b-4eb7-b23b-2de69398161c] Running
	I0103 20:14:31.504978   61400 system_pods.go:126] duration metric: took 201.020363ms to wait for k8s-apps to be running ...
	I0103 20:14:31.504987   61400 system_svc.go:44] waiting for kubelet service to be running ....
	I0103 20:14:31.505042   61400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 20:14:31.519544   61400 system_svc.go:56] duration metric: took 14.547054ms WaitForService to wait for kubelet.
	I0103 20:14:31.519581   61400 kubeadm.go:581] duration metric: took 8.263723255s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0103 20:14:31.519604   61400 node_conditions.go:102] verifying NodePressure condition ...
	I0103 20:14:31.703367   61400 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0103 20:14:31.703393   61400 node_conditions.go:123] node cpu capacity is 2
	I0103 20:14:31.703402   61400 node_conditions.go:105] duration metric: took 183.794284ms to run NodePressure ...
	I0103 20:14:31.703413   61400 start.go:228] waiting for startup goroutines ...
	I0103 20:14:31.703419   61400 start.go:233] waiting for cluster config update ...
	I0103 20:14:31.703427   61400 start.go:242] writing updated cluster config ...
	I0103 20:14:31.703726   61400 ssh_runner.go:195] Run: rm -f paused
	I0103 20:14:31.752491   61400 start.go:600] kubectl: 1.29.0, cluster: 1.16.0 (minor skew: 13)
	I0103 20:14:31.754609   61400 out.go:177] 
	W0103 20:14:31.756132   61400 out.go:239] ! /usr/local/bin/kubectl is version 1.29.0, which may have incompatibilities with Kubernetes 1.16.0.
	I0103 20:14:31.757531   61400 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0103 20:14:31.758908   61400 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-927922" cluster and "default" namespace by default
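The pod_ready waits in this run poll each system-critical pod for the Ready condition under the labels listed above. Roughly the same check can be reproduced by hand with kubectl wait, assuming those standard kube-system labels; this is an illustrative sketch, not the harness's own code:

	# wait for the Ready condition on the system-critical pods, per label
	kubectl --context old-k8s-version-927922 -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=4m
	kubectl --context old-k8s-version-927922 -n kube-system wait pod -l component=kube-apiserver --for=condition=Ready --timeout=4m
	kubectl --context old-k8s-version-927922 -n kube-system wait pod -l component=etcd --for=condition=Ready --timeout=4m
	kubectl --context old-k8s-version-927922 -n kube-system wait pod -l k8s-app=kube-proxy --for=condition=Ready --timeout=4m
	# the remaining labels from the log (kube-controller-manager, kube-scheduler) follow the same pattern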
	I0103 20:14:29.937557   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:32.437025   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:30.253875   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:32.752584   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:30.898036   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:33.398935   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:34.936535   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:37.436533   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:34.753233   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:37.252419   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:39.253992   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:35.896170   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:37.897520   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:40.397608   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:39.438748   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:41.439514   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:41.254480   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:43.756719   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:42.397869   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:44.398305   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:43.935597   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:45.936279   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:47.939184   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:46.253445   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:48.254497   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:46.896653   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:49.395106   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:50.436008   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:52.436929   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:50.754391   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:53.253984   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:51.396664   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:53.895621   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:54.937380   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:57.435980   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:55.254262   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:57.254379   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:56.399473   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:58.895378   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:59.436517   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:01.436644   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:03.437289   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:59.754343   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:02.256605   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:00.896080   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:02.896456   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:05.396614   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:05.935218   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:07.936528   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:04.753320   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:06.753702   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:08.754470   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:07.909774   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:10.398298   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:10.435847   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:12.436285   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:10.755735   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:13.260340   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:12.898368   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:15.395141   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:14.437252   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:16.437752   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:15.753850   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:18.252984   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:17.396224   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:19.396412   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:18.935744   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:20.936627   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:22.937157   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:20.753996   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:23.252893   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:21.396466   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:23.396556   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:25.435441   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:27.437177   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:25.253294   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:27.257573   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:25.895526   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:27.897999   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:30.396749   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:29.935811   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:31.936769   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:29.754895   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:32.252296   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:34.252439   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:32.398706   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:34.895914   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:34.435649   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:36.435937   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:36.253151   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:38.753045   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:36.897764   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:39.395522   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:38.935209   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:40.935922   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:42.936185   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:40.753242   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:43.254160   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:41.395722   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:43.895476   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:44.938043   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:47.436185   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:45.753607   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:47.757575   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:45.895628   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:47.898831   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:50.395366   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:49.437057   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:51.936658   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:50.254313   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:52.754096   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:52.396047   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:54.896005   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:53.937359   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:55.939092   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:58.435858   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:55.253159   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:57.752873   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:56.897368   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:59.397094   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:00.937099   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:02.937220   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:59.753924   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:01.754227   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:04.253189   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:01.895645   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:03.895950   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:05.435964   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:07.437247   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:06.753405   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:09.252564   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:06.395775   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:08.397119   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:09.937945   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:12.436531   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:11.254482   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:13.753409   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:10.898350   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:13.397549   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:14.936753   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:17.438482   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:15.753689   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:18.253420   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:15.895365   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:17.897998   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:19.898464   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:19.935559   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:21.935664   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:20.253748   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:22.253878   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:24.254457   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:22.395466   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:24.400100   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:23.935958   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:25.936631   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:28.436748   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:26.752881   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:29.253740   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:26.897228   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:29.396925   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:30.436921   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:32.939573   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:31.254681   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:33.759891   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:31.895948   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:33.899819   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:35.436828   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:37.437536   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:36.252972   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:38.254083   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:36.396572   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:38.895816   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:39.440085   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:41.939589   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:40.752960   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:42.753342   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:40.897788   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:43.396277   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:44.437295   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:46.934854   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:44.753613   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:47.253118   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:45.896539   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:47.897012   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:50.399452   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:48.936795   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:51.435353   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:53.436742   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:49.753890   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:52.252908   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:54.253390   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:52.895504   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:54.896960   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:55.937358   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:58.435997   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:56.256446   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:58.754312   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:56.898710   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:58.899652   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:00.437252   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:02.936336   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:01.254343   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:03.754483   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:01.398833   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:03.896269   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:05.437531   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:07.935848   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:05.755471   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:07.756171   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:05.897369   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:08.397436   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:09.936237   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:11.940482   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:10.253599   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:12.254176   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:14.254316   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:10.898370   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:13.400421   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:14.436922   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:16.936283   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:16.753503   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:19.253120   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:15.896003   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:18.396552   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:19.438479   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:21.936957   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:21.253522   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:23.752947   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:20.895961   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:23.395452   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:24.435005   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:26.437797   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:26.437828   61676 pod_ready.go:81] duration metric: took 4m0.009294112s waiting for pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace to be "Ready" ...
	E0103 20:17:26.437841   61676 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0103 20:17:26.437850   61676 pod_ready.go:38] duration metric: took 4m1.606787831s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:17:26.437868   61676 api_server.go:52] waiting for apiserver process to appear ...
	I0103 20:17:26.437901   61676 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0103 20:17:26.437951   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0103 20:17:26.499917   61676 cri.go:89] found id: "b43e6c342d85d7f5a7233dc5c79962eece071e18b4bbcd6583c66d34a8bba0d6"
	I0103 20:17:26.499942   61676 cri.go:89] found id: ""
	I0103 20:17:26.499958   61676 logs.go:284] 1 containers: [b43e6c342d85d7f5a7233dc5c79962eece071e18b4bbcd6583c66d34a8bba0d6]
	I0103 20:17:26.500014   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:26.504239   61676 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0103 20:17:26.504290   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0103 20:17:26.539965   61676 cri.go:89] found id: "d5b2310ec90e1b2cf6666d6b054b6e3233664a226da6f2c5dbb9f1aad6bf5e40"
	I0103 20:17:26.539992   61676 cri.go:89] found id: ""
	I0103 20:17:26.540001   61676 logs.go:284] 1 containers: [d5b2310ec90e1b2cf6666d6b054b6e3233664a226da6f2c5dbb9f1aad6bf5e40]
	I0103 20:17:26.540052   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:26.544591   61676 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0103 20:17:26.544667   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0103 20:17:26.583231   61676 cri.go:89] found id: "e982a226a7c2e7dda117aabd99279729198043d8b9c79fdc0904c8384f9a031b"
	I0103 20:17:26.583256   61676 cri.go:89] found id: ""
	I0103 20:17:26.583265   61676 logs.go:284] 1 containers: [e982a226a7c2e7dda117aabd99279729198043d8b9c79fdc0904c8384f9a031b]
	I0103 20:17:26.583328   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:26.587642   61676 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0103 20:17:26.587705   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0103 20:17:26.625230   61676 cri.go:89] found id: "91cc8e54c59c4642dc1a808adf81ed78af36cc6dae57d37fd913d2d01d232f3d"
	I0103 20:17:26.625258   61676 cri.go:89] found id: ""
	I0103 20:17:26.625267   61676 logs.go:284] 1 containers: [91cc8e54c59c4642dc1a808adf81ed78af36cc6dae57d37fd913d2d01d232f3d]
	I0103 20:17:26.625329   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:26.629448   61676 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0103 20:17:26.629527   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0103 20:17:26.666698   61676 cri.go:89] found id: "a076ccb3aaf528efca2f2b4a38d06dc3b8376212edc210731b7c268a24909ccf"
	I0103 20:17:26.666726   61676 cri.go:89] found id: ""
	I0103 20:17:26.666736   61676 logs.go:284] 1 containers: [a076ccb3aaf528efca2f2b4a38d06dc3b8376212edc210731b7c268a24909ccf]
	I0103 20:17:26.666796   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:26.671434   61676 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0103 20:17:26.671500   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0103 20:17:26.703900   61676 cri.go:89] found id: "8049f81441fd2e7bb236ca49b119c49cefb7a5c69b6880f606b3cc2789145523"
	I0103 20:17:26.703921   61676 cri.go:89] found id: ""
	I0103 20:17:26.703929   61676 logs.go:284] 1 containers: [8049f81441fd2e7bb236ca49b119c49cefb7a5c69b6880f606b3cc2789145523]
	I0103 20:17:26.703986   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:26.707915   61676 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0103 20:17:26.707979   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0103 20:17:26.747144   61676 cri.go:89] found id: ""
	I0103 20:17:26.747168   61676 logs.go:284] 0 containers: []
	W0103 20:17:26.747182   61676 logs.go:286] No container was found matching "kindnet"
	I0103 20:17:26.747189   61676 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0103 20:17:26.747246   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0103 20:17:26.786418   61676 cri.go:89] found id: "0ed16e65a5dbabdbadbf6d56a4fd7bafe80d50867ed1829e86bfa8f28e8fb719"
	I0103 20:17:26.786441   61676 cri.go:89] found id: "3c57ed4c58edf02d0e73cd6dfc6799993556d74916ffe4c07b91d85346f291e2"
	I0103 20:17:26.786448   61676 cri.go:89] found id: ""
	I0103 20:17:26.786456   61676 logs.go:284] 2 containers: [0ed16e65a5dbabdbadbf6d56a4fd7bafe80d50867ed1829e86bfa8f28e8fb719 3c57ed4c58edf02d0e73cd6dfc6799993556d74916ffe4c07b91d85346f291e2]
	I0103 20:17:26.786515   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:26.790506   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:26.794304   61676 logs.go:123] Gathering logs for kubelet ...
	I0103 20:17:26.794330   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 20:17:26.851272   61676 logs.go:123] Gathering logs for kube-apiserver [b43e6c342d85d7f5a7233dc5c79962eece071e18b4bbcd6583c66d34a8bba0d6] ...
	I0103 20:17:26.851317   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b43e6c342d85d7f5a7233dc5c79962eece071e18b4bbcd6583c66d34a8bba0d6"
	I0103 20:17:26.894480   61676 logs.go:123] Gathering logs for etcd [d5b2310ec90e1b2cf6666d6b054b6e3233664a226da6f2c5dbb9f1aad6bf5e40] ...
	I0103 20:17:26.894508   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5b2310ec90e1b2cf6666d6b054b6e3233664a226da6f2c5dbb9f1aad6bf5e40"
	I0103 20:17:26.941799   61676 logs.go:123] Gathering logs for kube-scheduler [91cc8e54c59c4642dc1a808adf81ed78af36cc6dae57d37fd913d2d01d232f3d] ...
	I0103 20:17:26.941826   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 91cc8e54c59c4642dc1a808adf81ed78af36cc6dae57d37fd913d2d01d232f3d"
	I0103 20:17:26.981759   61676 logs.go:123] Gathering logs for kube-proxy [a076ccb3aaf528efca2f2b4a38d06dc3b8376212edc210731b7c268a24909ccf] ...
	I0103 20:17:26.981793   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a076ccb3aaf528efca2f2b4a38d06dc3b8376212edc210731b7c268a24909ccf"
	I0103 20:17:27.021318   61676 logs.go:123] Gathering logs for storage-provisioner [0ed16e65a5dbabdbadbf6d56a4fd7bafe80d50867ed1829e86bfa8f28e8fb719] ...
	I0103 20:17:27.021347   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ed16e65a5dbabdbadbf6d56a4fd7bafe80d50867ed1829e86bfa8f28e8fb719"
	I0103 20:17:27.061320   61676 logs.go:123] Gathering logs for storage-provisioner [3c57ed4c58edf02d0e73cd6dfc6799993556d74916ffe4c07b91d85346f291e2] ...
	I0103 20:17:27.061351   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c57ed4c58edf02d0e73cd6dfc6799993556d74916ffe4c07b91d85346f291e2"
	I0103 20:17:27.110137   61676 logs.go:123] Gathering logs for dmesg ...
	I0103 20:17:27.110169   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 20:17:27.123548   61676 logs.go:123] Gathering logs for coredns [e982a226a7c2e7dda117aabd99279729198043d8b9c79fdc0904c8384f9a031b] ...
	I0103 20:17:27.123582   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e982a226a7c2e7dda117aabd99279729198043d8b9c79fdc0904c8384f9a031b"
	I0103 20:17:27.162644   61676 logs.go:123] Gathering logs for kube-controller-manager [8049f81441fd2e7bb236ca49b119c49cefb7a5c69b6880f606b3cc2789145523] ...
	I0103 20:17:27.162678   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8049f81441fd2e7bb236ca49b119c49cefb7a5c69b6880f606b3cc2789145523"
	I0103 20:17:27.211599   61676 logs.go:123] Gathering logs for describe nodes ...
	I0103 20:17:27.211636   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0103 20:17:27.361299   61676 logs.go:123] Gathering logs for CRI-O ...
	I0103 20:17:27.361329   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0103 20:17:27.866123   61676 logs.go:123] Gathering logs for container status ...
	I0103 20:17:27.866166   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 20:17:25.753957   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:27.754559   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:25.896204   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:28.395594   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:30.418870   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:17:30.433778   61676 api_server.go:72] duration metric: took 4m12.637164197s to wait for apiserver process to appear ...
	I0103 20:17:30.433801   61676 api_server.go:88] waiting for apiserver healthz status ...
	I0103 20:17:30.433838   61676 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0103 20:17:30.433911   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0103 20:17:30.472309   61676 cri.go:89] found id: "b43e6c342d85d7f5a7233dc5c79962eece071e18b4bbcd6583c66d34a8bba0d6"
	I0103 20:17:30.472337   61676 cri.go:89] found id: ""
	I0103 20:17:30.472348   61676 logs.go:284] 1 containers: [b43e6c342d85d7f5a7233dc5c79962eece071e18b4bbcd6583c66d34a8bba0d6]
	I0103 20:17:30.472407   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:30.476787   61676 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0103 20:17:30.476858   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0103 20:17:30.522290   61676 cri.go:89] found id: "d5b2310ec90e1b2cf6666d6b054b6e3233664a226da6f2c5dbb9f1aad6bf5e40"
	I0103 20:17:30.522322   61676 cri.go:89] found id: ""
	I0103 20:17:30.522334   61676 logs.go:284] 1 containers: [d5b2310ec90e1b2cf6666d6b054b6e3233664a226da6f2c5dbb9f1aad6bf5e40]
	I0103 20:17:30.522390   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:30.526502   61676 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0103 20:17:30.526581   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0103 20:17:30.568301   61676 cri.go:89] found id: "e982a226a7c2e7dda117aabd99279729198043d8b9c79fdc0904c8384f9a031b"
	I0103 20:17:30.568328   61676 cri.go:89] found id: ""
	I0103 20:17:30.568335   61676 logs.go:284] 1 containers: [e982a226a7c2e7dda117aabd99279729198043d8b9c79fdc0904c8384f9a031b]
	I0103 20:17:30.568382   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:30.572398   61676 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0103 20:17:30.572454   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0103 20:17:30.611671   61676 cri.go:89] found id: "91cc8e54c59c4642dc1a808adf81ed78af36cc6dae57d37fd913d2d01d232f3d"
	I0103 20:17:30.611694   61676 cri.go:89] found id: ""
	I0103 20:17:30.611702   61676 logs.go:284] 1 containers: [91cc8e54c59c4642dc1a808adf81ed78af36cc6dae57d37fd913d2d01d232f3d]
	I0103 20:17:30.611749   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:30.615971   61676 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0103 20:17:30.616035   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0103 20:17:30.658804   61676 cri.go:89] found id: "a076ccb3aaf528efca2f2b4a38d06dc3b8376212edc210731b7c268a24909ccf"
	I0103 20:17:30.658830   61676 cri.go:89] found id: ""
	I0103 20:17:30.658839   61676 logs.go:284] 1 containers: [a076ccb3aaf528efca2f2b4a38d06dc3b8376212edc210731b7c268a24909ccf]
	I0103 20:17:30.658889   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:30.662859   61676 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0103 20:17:30.662930   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0103 20:17:30.705941   61676 cri.go:89] found id: "8049f81441fd2e7bb236ca49b119c49cefb7a5c69b6880f606b3cc2789145523"
	I0103 20:17:30.705968   61676 cri.go:89] found id: ""
	I0103 20:17:30.705976   61676 logs.go:284] 1 containers: [8049f81441fd2e7bb236ca49b119c49cefb7a5c69b6880f606b3cc2789145523]
	I0103 20:17:30.706031   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:30.710228   61676 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0103 20:17:30.710308   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0103 20:17:30.749052   61676 cri.go:89] found id: ""
	I0103 20:17:30.749077   61676 logs.go:284] 0 containers: []
	W0103 20:17:30.749088   61676 logs.go:286] No container was found matching "kindnet"
	I0103 20:17:30.749096   61676 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0103 20:17:30.749157   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0103 20:17:30.786239   61676 cri.go:89] found id: "0ed16e65a5dbabdbadbf6d56a4fd7bafe80d50867ed1829e86bfa8f28e8fb719"
	I0103 20:17:30.786267   61676 cri.go:89] found id: "3c57ed4c58edf02d0e73cd6dfc6799993556d74916ffe4c07b91d85346f291e2"
	I0103 20:17:30.786273   61676 cri.go:89] found id: ""
	I0103 20:17:30.786280   61676 logs.go:284] 2 containers: [0ed16e65a5dbabdbadbf6d56a4fd7bafe80d50867ed1829e86bfa8f28e8fb719 3c57ed4c58edf02d0e73cd6dfc6799993556d74916ffe4c07b91d85346f291e2]
	I0103 20:17:30.786341   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:30.790680   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:30.794294   61676 logs.go:123] Gathering logs for coredns [e982a226a7c2e7dda117aabd99279729198043d8b9c79fdc0904c8384f9a031b] ...
	I0103 20:17:30.794320   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e982a226a7c2e7dda117aabd99279729198043d8b9c79fdc0904c8384f9a031b"
	I0103 20:17:30.835916   61676 logs.go:123] Gathering logs for storage-provisioner [0ed16e65a5dbabdbadbf6d56a4fd7bafe80d50867ed1829e86bfa8f28e8fb719] ...
	I0103 20:17:30.835952   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ed16e65a5dbabdbadbf6d56a4fd7bafe80d50867ed1829e86bfa8f28e8fb719"
	I0103 20:17:30.876225   61676 logs.go:123] Gathering logs for storage-provisioner [3c57ed4c58edf02d0e73cd6dfc6799993556d74916ffe4c07b91d85346f291e2] ...
	I0103 20:17:30.876255   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c57ed4c58edf02d0e73cd6dfc6799993556d74916ffe4c07b91d85346f291e2"
	I0103 20:17:30.917657   61676 logs.go:123] Gathering logs for dmesg ...
	I0103 20:17:30.917684   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 20:17:30.930805   61676 logs.go:123] Gathering logs for describe nodes ...
	I0103 20:17:30.930831   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0103 20:17:31.060049   61676 logs.go:123] Gathering logs for kube-apiserver [b43e6c342d85d7f5a7233dc5c79962eece071e18b4bbcd6583c66d34a8bba0d6] ...
	I0103 20:17:31.060086   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b43e6c342d85d7f5a7233dc5c79962eece071e18b4bbcd6583c66d34a8bba0d6"
	I0103 20:17:31.119725   61676 logs.go:123] Gathering logs for kube-scheduler [91cc8e54c59c4642dc1a808adf81ed78af36cc6dae57d37fd913d2d01d232f3d] ...
	I0103 20:17:31.119754   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 91cc8e54c59c4642dc1a808adf81ed78af36cc6dae57d37fd913d2d01d232f3d"
	I0103 20:17:31.164226   61676 logs.go:123] Gathering logs for kube-proxy [a076ccb3aaf528efca2f2b4a38d06dc3b8376212edc210731b7c268a24909ccf] ...
	I0103 20:17:31.164261   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a076ccb3aaf528efca2f2b4a38d06dc3b8376212edc210731b7c268a24909ccf"
	I0103 20:17:31.204790   61676 logs.go:123] Gathering logs for kube-controller-manager [8049f81441fd2e7bb236ca49b119c49cefb7a5c69b6880f606b3cc2789145523] ...
	I0103 20:17:31.204816   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8049f81441fd2e7bb236ca49b119c49cefb7a5c69b6880f606b3cc2789145523"
	I0103 20:17:31.264949   61676 logs.go:123] Gathering logs for CRI-O ...
	I0103 20:17:31.264984   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0103 20:17:31.658178   61676 logs.go:123] Gathering logs for kubelet ...
	I0103 20:17:31.658217   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 20:17:31.712090   61676 logs.go:123] Gathering logs for etcd [d5b2310ec90e1b2cf6666d6b054b6e3233664a226da6f2c5dbb9f1aad6bf5e40] ...
	I0103 20:17:31.712125   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5b2310ec90e1b2cf6666d6b054b6e3233664a226da6f2c5dbb9f1aad6bf5e40"
	I0103 20:17:31.757333   61676 logs.go:123] Gathering logs for container status ...
	I0103 20:17:31.757364   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 20:17:30.253170   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:32.753056   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:30.896380   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:32.896512   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:35.399775   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:34.304692   61676 api_server.go:253] Checking apiserver healthz at https://192.168.50.197:8443/healthz ...
	I0103 20:17:34.311338   61676 api_server.go:279] https://192.168.50.197:8443/healthz returned 200:
	ok
	I0103 20:17:34.312603   61676 api_server.go:141] control plane version: v1.28.4
	I0103 20:17:34.312624   61676 api_server.go:131] duration metric: took 3.878815002s to wait for apiserver health ...
	I0103 20:17:34.312632   61676 system_pods.go:43] waiting for kube-system pods to appear ...
	I0103 20:17:34.312651   61676 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0103 20:17:34.312705   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0103 20:17:34.347683   61676 cri.go:89] found id: "b43e6c342d85d7f5a7233dc5c79962eece071e18b4bbcd6583c66d34a8bba0d6"
	I0103 20:17:34.347701   61676 cri.go:89] found id: ""
	I0103 20:17:34.347711   61676 logs.go:284] 1 containers: [b43e6c342d85d7f5a7233dc5c79962eece071e18b4bbcd6583c66d34a8bba0d6]
	I0103 20:17:34.347769   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:34.351695   61676 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0103 20:17:34.351773   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0103 20:17:34.386166   61676 cri.go:89] found id: "d5b2310ec90e1b2cf6666d6b054b6e3233664a226da6f2c5dbb9f1aad6bf5e40"
	I0103 20:17:34.386188   61676 cri.go:89] found id: ""
	I0103 20:17:34.386197   61676 logs.go:284] 1 containers: [d5b2310ec90e1b2cf6666d6b054b6e3233664a226da6f2c5dbb9f1aad6bf5e40]
	I0103 20:17:34.386259   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:34.390352   61676 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0103 20:17:34.390427   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0103 20:17:34.427772   61676 cri.go:89] found id: "e982a226a7c2e7dda117aabd99279729198043d8b9c79fdc0904c8384f9a031b"
	I0103 20:17:34.427801   61676 cri.go:89] found id: ""
	I0103 20:17:34.427811   61676 logs.go:284] 1 containers: [e982a226a7c2e7dda117aabd99279729198043d8b9c79fdc0904c8384f9a031b]
	I0103 20:17:34.427872   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:34.432258   61676 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0103 20:17:34.432324   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0103 20:17:34.471746   61676 cri.go:89] found id: "91cc8e54c59c4642dc1a808adf81ed78af36cc6dae57d37fd913d2d01d232f3d"
	I0103 20:17:34.471789   61676 cri.go:89] found id: ""
	I0103 20:17:34.471812   61676 logs.go:284] 1 containers: [91cc8e54c59c4642dc1a808adf81ed78af36cc6dae57d37fd913d2d01d232f3d]
	I0103 20:17:34.471878   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:34.476656   61676 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0103 20:17:34.476729   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0103 20:17:34.514594   61676 cri.go:89] found id: "a076ccb3aaf528efca2f2b4a38d06dc3b8376212edc210731b7c268a24909ccf"
	I0103 20:17:34.514626   61676 cri.go:89] found id: ""
	I0103 20:17:34.514685   61676 logs.go:284] 1 containers: [a076ccb3aaf528efca2f2b4a38d06dc3b8376212edc210731b7c268a24909ccf]
	I0103 20:17:34.514779   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:34.518789   61676 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0103 20:17:34.518849   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0103 20:17:34.555672   61676 cri.go:89] found id: "8049f81441fd2e7bb236ca49b119c49cefb7a5c69b6880f606b3cc2789145523"
	I0103 20:17:34.555698   61676 cri.go:89] found id: ""
	I0103 20:17:34.555707   61676 logs.go:284] 1 containers: [8049f81441fd2e7bb236ca49b119c49cefb7a5c69b6880f606b3cc2789145523]
	I0103 20:17:34.555771   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:34.560278   61676 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0103 20:17:34.560338   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0103 20:17:34.598718   61676 cri.go:89] found id: ""
	I0103 20:17:34.598742   61676 logs.go:284] 0 containers: []
	W0103 20:17:34.598753   61676 logs.go:286] No container was found matching "kindnet"
	I0103 20:17:34.598759   61676 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0103 20:17:34.598810   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0103 20:17:34.635723   61676 cri.go:89] found id: "0ed16e65a5dbabdbadbf6d56a4fd7bafe80d50867ed1829e86bfa8f28e8fb719"
	I0103 20:17:34.635751   61676 cri.go:89] found id: "3c57ed4c58edf02d0e73cd6dfc6799993556d74916ffe4c07b91d85346f291e2"
	I0103 20:17:34.635758   61676 cri.go:89] found id: ""
	I0103 20:17:34.635767   61676 logs.go:284] 2 containers: [0ed16e65a5dbabdbadbf6d56a4fd7bafe80d50867ed1829e86bfa8f28e8fb719 3c57ed4c58edf02d0e73cd6dfc6799993556d74916ffe4c07b91d85346f291e2]
	I0103 20:17:34.635814   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:34.640466   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:34.644461   61676 logs.go:123] Gathering logs for dmesg ...
	I0103 20:17:34.644490   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 20:17:34.659819   61676 logs.go:123] Gathering logs for coredns [e982a226a7c2e7dda117aabd99279729198043d8b9c79fdc0904c8384f9a031b] ...
	I0103 20:17:34.659856   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e982a226a7c2e7dda117aabd99279729198043d8b9c79fdc0904c8384f9a031b"
	I0103 20:17:34.697807   61676 logs.go:123] Gathering logs for kube-scheduler [91cc8e54c59c4642dc1a808adf81ed78af36cc6dae57d37fd913d2d01d232f3d] ...
	I0103 20:17:34.697840   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 91cc8e54c59c4642dc1a808adf81ed78af36cc6dae57d37fd913d2d01d232f3d"
	I0103 20:17:34.745366   61676 logs.go:123] Gathering logs for kube-controller-manager [8049f81441fd2e7bb236ca49b119c49cefb7a5c69b6880f606b3cc2789145523] ...
	I0103 20:17:34.745397   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8049f81441fd2e7bb236ca49b119c49cefb7a5c69b6880f606b3cc2789145523"
	I0103 20:17:34.804885   61676 logs.go:123] Gathering logs for kube-apiserver [b43e6c342d85d7f5a7233dc5c79962eece071e18b4bbcd6583c66d34a8bba0d6] ...
	I0103 20:17:34.804919   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b43e6c342d85d7f5a7233dc5c79962eece071e18b4bbcd6583c66d34a8bba0d6"
	I0103 20:17:34.848753   61676 logs.go:123] Gathering logs for etcd [d5b2310ec90e1b2cf6666d6b054b6e3233664a226da6f2c5dbb9f1aad6bf5e40] ...
	I0103 20:17:34.848784   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5b2310ec90e1b2cf6666d6b054b6e3233664a226da6f2c5dbb9f1aad6bf5e40"
	I0103 20:17:34.891492   61676 logs.go:123] Gathering logs for CRI-O ...
	I0103 20:17:34.891524   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0103 20:17:35.234093   61676 logs.go:123] Gathering logs for kube-proxy [a076ccb3aaf528efca2f2b4a38d06dc3b8376212edc210731b7c268a24909ccf] ...
	I0103 20:17:35.234133   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a076ccb3aaf528efca2f2b4a38d06dc3b8376212edc210731b7c268a24909ccf"
	I0103 20:17:35.281396   61676 logs.go:123] Gathering logs for storage-provisioner [0ed16e65a5dbabdbadbf6d56a4fd7bafe80d50867ed1829e86bfa8f28e8fb719] ...
	I0103 20:17:35.281425   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ed16e65a5dbabdbadbf6d56a4fd7bafe80d50867ed1829e86bfa8f28e8fb719"
	I0103 20:17:35.317595   61676 logs.go:123] Gathering logs for storage-provisioner [3c57ed4c58edf02d0e73cd6dfc6799993556d74916ffe4c07b91d85346f291e2] ...
	I0103 20:17:35.317622   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c57ed4c58edf02d0e73cd6dfc6799993556d74916ffe4c07b91d85346f291e2"
	I0103 20:17:35.357552   61676 logs.go:123] Gathering logs for container status ...
	I0103 20:17:35.357600   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 20:17:35.405369   61676 logs.go:123] Gathering logs for kubelet ...
	I0103 20:17:35.405394   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 20:17:35.459496   61676 logs.go:123] Gathering logs for describe nodes ...
	I0103 20:17:35.459535   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0103 20:17:38.101844   61676 system_pods.go:59] 8 kube-system pods found
	I0103 20:17:38.101870   61676 system_pods.go:61] "coredns-5dd5756b68-sx6gg" [6a4ea161-1a32-4c3b-9a0d-b4c596492d8b] Running
	I0103 20:17:38.101875   61676 system_pods.go:61] "etcd-embed-certs-451331" [01d6441d-5e39-405a-81df-c2ed1e28cf0b] Running
	I0103 20:17:38.101879   61676 system_pods.go:61] "kube-apiserver-embed-certs-451331" [ed38f120-6a1a-48e7-9346-f792f2e13cfc] Running
	I0103 20:17:38.101886   61676 system_pods.go:61] "kube-controller-manager-embed-certs-451331" [4ca17ea6-a7e6-425b-98ba-7f917ceb91a0] Running
	I0103 20:17:38.101892   61676 system_pods.go:61] "kube-proxy-fsnb9" [d1f00cf1-e9c4-442b-a6b3-b633252b840c] Running
	I0103 20:17:38.101898   61676 system_pods.go:61] "kube-scheduler-embed-certs-451331" [00ec8091-7ed7-40b0-8b63-1c548fa8632d] Running
	I0103 20:17:38.101907   61676 system_pods.go:61] "metrics-server-57f55c9bc5-sm8rb" [12b9f83d-abf8-431c-a271-b8489d32f0de] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0103 20:17:38.101919   61676 system_pods.go:61] "storage-provisioner" [cbce49e7-cef5-40a1-a017-906fcc77ef66] Running
	I0103 20:17:38.101931   61676 system_pods.go:74] duration metric: took 3.789293156s to wait for pod list to return data ...
	I0103 20:17:38.101940   61676 default_sa.go:34] waiting for default service account to be created ...
	I0103 20:17:38.104360   61676 default_sa.go:45] found service account: "default"
	I0103 20:17:38.104386   61676 default_sa.go:55] duration metric: took 2.437157ms for default service account to be created ...
	I0103 20:17:38.104395   61676 system_pods.go:116] waiting for k8s-apps to be running ...
	I0103 20:17:38.110198   61676 system_pods.go:86] 8 kube-system pods found
	I0103 20:17:38.110226   61676 system_pods.go:89] "coredns-5dd5756b68-sx6gg" [6a4ea161-1a32-4c3b-9a0d-b4c596492d8b] Running
	I0103 20:17:38.110233   61676 system_pods.go:89] "etcd-embed-certs-451331" [01d6441d-5e39-405a-81df-c2ed1e28cf0b] Running
	I0103 20:17:38.110241   61676 system_pods.go:89] "kube-apiserver-embed-certs-451331" [ed38f120-6a1a-48e7-9346-f792f2e13cfc] Running
	I0103 20:17:38.110247   61676 system_pods.go:89] "kube-controller-manager-embed-certs-451331" [4ca17ea6-a7e6-425b-98ba-7f917ceb91a0] Running
	I0103 20:17:38.110254   61676 system_pods.go:89] "kube-proxy-fsnb9" [d1f00cf1-e9c4-442b-a6b3-b633252b840c] Running
	I0103 20:17:38.110262   61676 system_pods.go:89] "kube-scheduler-embed-certs-451331" [00ec8091-7ed7-40b0-8b63-1c548fa8632d] Running
	I0103 20:17:38.110272   61676 system_pods.go:89] "metrics-server-57f55c9bc5-sm8rb" [12b9f83d-abf8-431c-a271-b8489d32f0de] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0103 20:17:38.110287   61676 system_pods.go:89] "storage-provisioner" [cbce49e7-cef5-40a1-a017-906fcc77ef66] Running
	I0103 20:17:38.110300   61676 system_pods.go:126] duration metric: took 5.897003ms to wait for k8s-apps to be running ...
	I0103 20:17:38.110310   61676 system_svc.go:44] waiting for kubelet service to be running ....
	I0103 20:17:38.110359   61676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 20:17:38.129025   61676 system_svc.go:56] duration metric: took 18.705736ms WaitForService to wait for kubelet.
	I0103 20:17:38.129071   61676 kubeadm.go:581] duration metric: took 4m20.332460734s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0103 20:17:38.129104   61676 node_conditions.go:102] verifying NodePressure condition ...
	I0103 20:17:38.132674   61676 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0103 20:17:38.132703   61676 node_conditions.go:123] node cpu capacity is 2
	I0103 20:17:38.132718   61676 node_conditions.go:105] duration metric: took 3.608193ms to run NodePressure ...
	I0103 20:17:38.132803   61676 start.go:228] waiting for startup goroutines ...
	I0103 20:17:38.132830   61676 start.go:233] waiting for cluster config update ...
	I0103 20:17:38.132846   61676 start.go:242] writing updated cluster config ...
	I0103 20:17:38.133198   61676 ssh_runner.go:195] Run: rm -f paused
	I0103 20:17:38.185728   61676 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0103 20:17:38.187862   61676 out.go:177] * Done! kubectl is now configured to use "embed-certs-451331" cluster and "default" namespace by default
	I0103 20:17:34.753175   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:37.254091   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:37.896317   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:40.396299   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:39.752580   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:41.755418   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:44.253073   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:42.897389   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:45.396646   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:46.253958   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:48.753284   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:47.398164   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:49.895246   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:50.754133   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:53.253046   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:51.895627   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:53.897877   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:55.254029   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:57.752707   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:56.398655   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:58.897483   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:59.753306   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:18:01.753500   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:18:02.255901   62015 pod_ready.go:81] duration metric: took 4m0.010124972s waiting for pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace to be "Ready" ...
	E0103 20:18:02.255929   62015 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0103 20:18:02.255939   62015 pod_ready.go:38] duration metric: took 4m4.070078749s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:18:02.255957   62015 api_server.go:52] waiting for apiserver process to appear ...
	I0103 20:18:02.255989   62015 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0103 20:18:02.256064   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0103 20:18:02.312578   62015 cri.go:89] found id: "fb19a705262540d0025443f8a9db8a3139cffa87d8fd8f412b12547818a91a8b"
	I0103 20:18:02.312606   62015 cri.go:89] found id: ""
	I0103 20:18:02.312616   62015 logs.go:284] 1 containers: [fb19a705262540d0025443f8a9db8a3139cffa87d8fd8f412b12547818a91a8b]
	I0103 20:18:02.312679   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:02.317969   62015 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0103 20:18:02.318064   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0103 20:18:02.361423   62015 cri.go:89] found id: "f7d2f606bd4457e00acad3e38e40d2a0eeba1830b665bd0b453061447cb63748"
	I0103 20:18:02.361451   62015 cri.go:89] found id: ""
	I0103 20:18:02.361464   62015 logs.go:284] 1 containers: [f7d2f606bd4457e00acad3e38e40d2a0eeba1830b665bd0b453061447cb63748]
	I0103 20:18:02.361527   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:02.365691   62015 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0103 20:18:02.365772   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0103 20:18:02.415087   62015 cri.go:89] found id: "b13d0a23b2b29e43671041dd6b0519d5cdc0ff78c47236c54585056c177f7f4a"
	I0103 20:18:02.415118   62015 cri.go:89] found id: ""
	I0103 20:18:02.415128   62015 logs.go:284] 1 containers: [b13d0a23b2b29e43671041dd6b0519d5cdc0ff78c47236c54585056c177f7f4a]
	I0103 20:18:02.415188   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:02.419409   62015 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0103 20:18:02.419493   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0103 20:18:02.459715   62015 cri.go:89] found id: "03433af76d74a120ca843ab0d9d45d2a0808d9048bf656bff68d7b7371082893"
	I0103 20:18:02.459744   62015 cri.go:89] found id: ""
	I0103 20:18:02.459754   62015 logs.go:284] 1 containers: [03433af76d74a120ca843ab0d9d45d2a0808d9048bf656bff68d7b7371082893]
	I0103 20:18:02.459816   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:02.464105   62015 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0103 20:18:02.464186   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0103 20:18:02.515523   62015 cri.go:89] found id: "250be399ab1a0c358e410293a6952aadd55d48a462f2edb9c6d0b560eb323cd8"
	I0103 20:18:02.515547   62015 cri.go:89] found id: ""
	I0103 20:18:02.515556   62015 logs.go:284] 1 containers: [250be399ab1a0c358e410293a6952aadd55d48a462f2edb9c6d0b560eb323cd8]
	I0103 20:18:02.515619   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:02.519586   62015 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0103 20:18:02.519646   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0103 20:18:02.561187   62015 cri.go:89] found id: "67f470e7e603d5253da506a4ad4eacce339e32b382bb6ad5e981e6f0c40abb85"
	I0103 20:18:02.561210   62015 cri.go:89] found id: ""
	I0103 20:18:02.561219   62015 logs.go:284] 1 containers: [67f470e7e603d5253da506a4ad4eacce339e32b382bb6ad5e981e6f0c40abb85]
	I0103 20:18:02.561288   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:02.566206   62015 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0103 20:18:02.566289   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0103 20:18:02.610993   62015 cri.go:89] found id: ""
	I0103 20:18:02.611019   62015 logs.go:284] 0 containers: []
	W0103 20:18:02.611029   62015 logs.go:286] No container was found matching "kindnet"
	I0103 20:18:02.611036   62015 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0103 20:18:02.611111   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0103 20:18:02.651736   62015 cri.go:89] found id: "08f95eed823c13190efb38f0b605b92442b8229f1fd1e3b9ab3f2d7fdf18c052"
	I0103 20:18:02.651764   62015 cri.go:89] found id: "367b9549fe5f7c717d71d33d0fbf5559d0b671b4eec29201566aa8354781474d"
	I0103 20:18:02.651771   62015 cri.go:89] found id: ""
	I0103 20:18:02.651779   62015 logs.go:284] 2 containers: [08f95eed823c13190efb38f0b605b92442b8229f1fd1e3b9ab3f2d7fdf18c052 367b9549fe5f7c717d71d33d0fbf5559d0b671b4eec29201566aa8354781474d]
	I0103 20:18:02.651839   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:02.656284   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:02.660614   62015 logs.go:123] Gathering logs for etcd [f7d2f606bd4457e00acad3e38e40d2a0eeba1830b665bd0b453061447cb63748] ...
	I0103 20:18:02.660636   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7d2f606bd4457e00acad3e38e40d2a0eeba1830b665bd0b453061447cb63748"
	I0103 20:18:02.707759   62015 logs.go:123] Gathering logs for kube-controller-manager [67f470e7e603d5253da506a4ad4eacce339e32b382bb6ad5e981e6f0c40abb85] ...
	I0103 20:18:02.707804   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 67f470e7e603d5253da506a4ad4eacce339e32b382bb6ad5e981e6f0c40abb85"
	I0103 20:18:02.766498   62015 logs.go:123] Gathering logs for CRI-O ...
	I0103 20:18:02.766551   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0103 20:18:03.227838   62015 logs.go:123] Gathering logs for kube-proxy [250be399ab1a0c358e410293a6952aadd55d48a462f2edb9c6d0b560eb323cd8] ...
	I0103 20:18:03.227884   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 250be399ab1a0c358e410293a6952aadd55d48a462f2edb9c6d0b560eb323cd8"
	I0103 20:18:03.269131   62015 logs.go:123] Gathering logs for storage-provisioner [08f95eed823c13190efb38f0b605b92442b8229f1fd1e3b9ab3f2d7fdf18c052] ...
	I0103 20:18:03.269174   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08f95eed823c13190efb38f0b605b92442b8229f1fd1e3b9ab3f2d7fdf18c052"
	I0103 20:18:03.307383   62015 logs.go:123] Gathering logs for kubelet ...
	I0103 20:18:03.307410   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 20:18:03.362005   62015 logs.go:123] Gathering logs for kube-apiserver [fb19a705262540d0025443f8a9db8a3139cffa87d8fd8f412b12547818a91a8b] ...
	I0103 20:18:03.362043   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb19a705262540d0025443f8a9db8a3139cffa87d8fd8f412b12547818a91a8b"
	I0103 20:18:03.412300   62015 logs.go:123] Gathering logs for coredns [b13d0a23b2b29e43671041dd6b0519d5cdc0ff78c47236c54585056c177f7f4a] ...
	I0103 20:18:03.412333   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b13d0a23b2b29e43671041dd6b0519d5cdc0ff78c47236c54585056c177f7f4a"
	I0103 20:18:03.448896   62015 logs.go:123] Gathering logs for describe nodes ...
	I0103 20:18:03.448922   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0103 20:18:03.587950   62015 logs.go:123] Gathering logs for kube-scheduler [03433af76d74a120ca843ab0d9d45d2a0808d9048bf656bff68d7b7371082893] ...
	I0103 20:18:03.587982   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03433af76d74a120ca843ab0d9d45d2a0808d9048bf656bff68d7b7371082893"
	I0103 20:18:03.629411   62015 logs.go:123] Gathering logs for container status ...
	I0103 20:18:03.629438   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 20:18:03.672468   62015 logs.go:123] Gathering logs for dmesg ...
	I0103 20:18:03.672499   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 20:18:03.685645   62015 logs.go:123] Gathering logs for storage-provisioner [367b9549fe5f7c717d71d33d0fbf5559d0b671b4eec29201566aa8354781474d] ...
	I0103 20:18:03.685682   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 367b9549fe5f7c717d71d33d0fbf5559d0b671b4eec29201566aa8354781474d"
	I0103 20:18:01.395689   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:18:03.396256   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:18:06.229417   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:18:06.244272   62015 api_server.go:72] duration metric: took 4m15.901019711s to wait for apiserver process to appear ...
	I0103 20:18:06.244306   62015 api_server.go:88] waiting for apiserver healthz status ...
	I0103 20:18:06.244351   62015 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0103 20:18:06.244412   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0103 20:18:06.292204   62015 cri.go:89] found id: "fb19a705262540d0025443f8a9db8a3139cffa87d8fd8f412b12547818a91a8b"
	I0103 20:18:06.292235   62015 cri.go:89] found id: ""
	I0103 20:18:06.292246   62015 logs.go:284] 1 containers: [fb19a705262540d0025443f8a9db8a3139cffa87d8fd8f412b12547818a91a8b]
	I0103 20:18:06.292309   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:06.296724   62015 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0103 20:18:06.296791   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0103 20:18:06.333984   62015 cri.go:89] found id: "f7d2f606bd4457e00acad3e38e40d2a0eeba1830b665bd0b453061447cb63748"
	I0103 20:18:06.334012   62015 cri.go:89] found id: ""
	I0103 20:18:06.334023   62015 logs.go:284] 1 containers: [f7d2f606bd4457e00acad3e38e40d2a0eeba1830b665bd0b453061447cb63748]
	I0103 20:18:06.334079   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:06.338045   62015 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0103 20:18:06.338123   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0103 20:18:06.374586   62015 cri.go:89] found id: "b13d0a23b2b29e43671041dd6b0519d5cdc0ff78c47236c54585056c177f7f4a"
	I0103 20:18:06.374610   62015 cri.go:89] found id: ""
	I0103 20:18:06.374617   62015 logs.go:284] 1 containers: [b13d0a23b2b29e43671041dd6b0519d5cdc0ff78c47236c54585056c177f7f4a]
	I0103 20:18:06.374669   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:06.378720   62015 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0103 20:18:06.378792   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0103 20:18:06.416220   62015 cri.go:89] found id: "03433af76d74a120ca843ab0d9d45d2a0808d9048bf656bff68d7b7371082893"
	I0103 20:18:06.416240   62015 cri.go:89] found id: ""
	I0103 20:18:06.416247   62015 logs.go:284] 1 containers: [03433af76d74a120ca843ab0d9d45d2a0808d9048bf656bff68d7b7371082893]
	I0103 20:18:06.416300   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:06.420190   62015 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0103 20:18:06.420247   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0103 20:18:06.458725   62015 cri.go:89] found id: "250be399ab1a0c358e410293a6952aadd55d48a462f2edb9c6d0b560eb323cd8"
	I0103 20:18:06.458745   62015 cri.go:89] found id: ""
	I0103 20:18:06.458754   62015 logs.go:284] 1 containers: [250be399ab1a0c358e410293a6952aadd55d48a462f2edb9c6d0b560eb323cd8]
	I0103 20:18:06.458808   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:06.462703   62015 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0103 20:18:06.462759   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0103 20:18:06.504559   62015 cri.go:89] found id: "67f470e7e603d5253da506a4ad4eacce339e32b382bb6ad5e981e6f0c40abb85"
	I0103 20:18:06.504587   62015 cri.go:89] found id: ""
	I0103 20:18:06.504596   62015 logs.go:284] 1 containers: [67f470e7e603d5253da506a4ad4eacce339e32b382bb6ad5e981e6f0c40abb85]
	I0103 20:18:06.504659   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:06.508602   62015 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0103 20:18:06.508662   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0103 20:18:06.559810   62015 cri.go:89] found id: ""
	I0103 20:18:06.559833   62015 logs.go:284] 0 containers: []
	W0103 20:18:06.559840   62015 logs.go:286] No container was found matching "kindnet"
	I0103 20:18:06.559846   62015 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0103 20:18:06.559905   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0103 20:18:06.598672   62015 cri.go:89] found id: "08f95eed823c13190efb38f0b605b92442b8229f1fd1e3b9ab3f2d7fdf18c052"
	I0103 20:18:06.598697   62015 cri.go:89] found id: "367b9549fe5f7c717d71d33d0fbf5559d0b671b4eec29201566aa8354781474d"
	I0103 20:18:06.598704   62015 cri.go:89] found id: ""
	I0103 20:18:06.598712   62015 logs.go:284] 2 containers: [08f95eed823c13190efb38f0b605b92442b8229f1fd1e3b9ab3f2d7fdf18c052 367b9549fe5f7c717d71d33d0fbf5559d0b671b4eec29201566aa8354781474d]
	I0103 20:18:06.598766   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:06.602828   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:06.607033   62015 logs.go:123] Gathering logs for describe nodes ...
	I0103 20:18:06.607050   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0103 20:18:06.758606   62015 logs.go:123] Gathering logs for storage-provisioner [367b9549fe5f7c717d71d33d0fbf5559d0b671b4eec29201566aa8354781474d] ...
	I0103 20:18:06.758634   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 367b9549fe5f7c717d71d33d0fbf5559d0b671b4eec29201566aa8354781474d"
	I0103 20:18:06.797521   62015 logs.go:123] Gathering logs for kubelet ...
	I0103 20:18:06.797552   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 20:18:06.856126   62015 logs.go:123] Gathering logs for kube-apiserver [fb19a705262540d0025443f8a9db8a3139cffa87d8fd8f412b12547818a91a8b] ...
	I0103 20:18:06.856159   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb19a705262540d0025443f8a9db8a3139cffa87d8fd8f412b12547818a91a8b"
	I0103 20:18:06.902629   62015 logs.go:123] Gathering logs for etcd [f7d2f606bd4457e00acad3e38e40d2a0eeba1830b665bd0b453061447cb63748] ...
	I0103 20:18:06.902656   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7d2f606bd4457e00acad3e38e40d2a0eeba1830b665bd0b453061447cb63748"
	I0103 20:18:06.953115   62015 logs.go:123] Gathering logs for storage-provisioner [08f95eed823c13190efb38f0b605b92442b8229f1fd1e3b9ab3f2d7fdf18c052] ...
	I0103 20:18:06.953154   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08f95eed823c13190efb38f0b605b92442b8229f1fd1e3b9ab3f2d7fdf18c052"
	I0103 20:18:06.993311   62015 logs.go:123] Gathering logs for CRI-O ...
	I0103 20:18:06.993342   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0103 20:18:07.393614   62015 logs.go:123] Gathering logs for dmesg ...
	I0103 20:18:07.393655   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 20:18:07.408367   62015 logs.go:123] Gathering logs for kube-proxy [250be399ab1a0c358e410293a6952aadd55d48a462f2edb9c6d0b560eb323cd8] ...
	I0103 20:18:07.408397   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 250be399ab1a0c358e410293a6952aadd55d48a462f2edb9c6d0b560eb323cd8"
	I0103 20:18:07.446725   62015 logs.go:123] Gathering logs for container status ...
	I0103 20:18:07.446756   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 20:18:07.494564   62015 logs.go:123] Gathering logs for coredns [b13d0a23b2b29e43671041dd6b0519d5cdc0ff78c47236c54585056c177f7f4a] ...
	I0103 20:18:07.494595   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b13d0a23b2b29e43671041dd6b0519d5cdc0ff78c47236c54585056c177f7f4a"
	I0103 20:18:07.529151   62015 logs.go:123] Gathering logs for kube-scheduler [03433af76d74a120ca843ab0d9d45d2a0808d9048bf656bff68d7b7371082893] ...
	I0103 20:18:07.529176   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03433af76d74a120ca843ab0d9d45d2a0808d9048bf656bff68d7b7371082893"
	I0103 20:18:07.577090   62015 logs.go:123] Gathering logs for kube-controller-manager [67f470e7e603d5253da506a4ad4eacce339e32b382bb6ad5e981e6f0c40abb85] ...
	I0103 20:18:07.577118   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 67f470e7e603d5253da506a4ad4eacce339e32b382bb6ad5e981e6f0c40abb85"
	I0103 20:18:05.895682   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:18:08.395751   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:18:10.396488   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:18:10.133806   62015 api_server.go:253] Checking apiserver healthz at https://192.168.61.245:8443/healthz ...
	I0103 20:18:10.138606   62015 api_server.go:279] https://192.168.61.245:8443/healthz returned 200:
	ok
	I0103 20:18:10.139965   62015 api_server.go:141] control plane version: v1.29.0-rc.2
	I0103 20:18:10.139986   62015 api_server.go:131] duration metric: took 3.895673488s to wait for apiserver health ...
	I0103 20:18:10.140004   62015 system_pods.go:43] waiting for kube-system pods to appear ...
	I0103 20:18:10.140032   62015 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0103 20:18:10.140078   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0103 20:18:10.177309   62015 cri.go:89] found id: "fb19a705262540d0025443f8a9db8a3139cffa87d8fd8f412b12547818a91a8b"
	I0103 20:18:10.177336   62015 cri.go:89] found id: ""
	I0103 20:18:10.177347   62015 logs.go:284] 1 containers: [fb19a705262540d0025443f8a9db8a3139cffa87d8fd8f412b12547818a91a8b]
	I0103 20:18:10.177398   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:10.181215   62015 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0103 20:18:10.181287   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0103 20:18:10.217151   62015 cri.go:89] found id: "f7d2f606bd4457e00acad3e38e40d2a0eeba1830b665bd0b453061447cb63748"
	I0103 20:18:10.217174   62015 cri.go:89] found id: ""
	I0103 20:18:10.217183   62015 logs.go:284] 1 containers: [f7d2f606bd4457e00acad3e38e40d2a0eeba1830b665bd0b453061447cb63748]
	I0103 20:18:10.217242   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:10.221363   62015 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0103 20:18:10.221447   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0103 20:18:10.271359   62015 cri.go:89] found id: "b13d0a23b2b29e43671041dd6b0519d5cdc0ff78c47236c54585056c177f7f4a"
	I0103 20:18:10.271387   62015 cri.go:89] found id: ""
	I0103 20:18:10.271397   62015 logs.go:284] 1 containers: [b13d0a23b2b29e43671041dd6b0519d5cdc0ff78c47236c54585056c177f7f4a]
	I0103 20:18:10.271460   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:10.277366   62015 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0103 20:18:10.277439   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0103 20:18:10.325567   62015 cri.go:89] found id: "03433af76d74a120ca843ab0d9d45d2a0808d9048bf656bff68d7b7371082893"
	I0103 20:18:10.325594   62015 cri.go:89] found id: ""
	I0103 20:18:10.325604   62015 logs.go:284] 1 containers: [03433af76d74a120ca843ab0d9d45d2a0808d9048bf656bff68d7b7371082893]
	I0103 20:18:10.325662   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:10.331222   62015 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0103 20:18:10.331292   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0103 20:18:10.370488   62015 cri.go:89] found id: "250be399ab1a0c358e410293a6952aadd55d48a462f2edb9c6d0b560eb323cd8"
	I0103 20:18:10.370516   62015 cri.go:89] found id: ""
	I0103 20:18:10.370539   62015 logs.go:284] 1 containers: [250be399ab1a0c358e410293a6952aadd55d48a462f2edb9c6d0b560eb323cd8]
	I0103 20:18:10.370598   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:10.375213   62015 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0103 20:18:10.375272   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0103 20:18:10.417606   62015 cri.go:89] found id: "67f470e7e603d5253da506a4ad4eacce339e32b382bb6ad5e981e6f0c40abb85"
	I0103 20:18:10.417626   62015 cri.go:89] found id: ""
	I0103 20:18:10.417633   62015 logs.go:284] 1 containers: [67f470e7e603d5253da506a4ad4eacce339e32b382bb6ad5e981e6f0c40abb85]
	I0103 20:18:10.417678   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:10.421786   62015 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0103 20:18:10.421848   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0103 20:18:10.459092   62015 cri.go:89] found id: ""
	I0103 20:18:10.459119   62015 logs.go:284] 0 containers: []
	W0103 20:18:10.459129   62015 logs.go:286] No container was found matching "kindnet"
	I0103 20:18:10.459136   62015 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0103 20:18:10.459184   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0103 20:18:10.504845   62015 cri.go:89] found id: "08f95eed823c13190efb38f0b605b92442b8229f1fd1e3b9ab3f2d7fdf18c052"
	I0103 20:18:10.504874   62015 cri.go:89] found id: "367b9549fe5f7c717d71d33d0fbf5559d0b671b4eec29201566aa8354781474d"
	I0103 20:18:10.504879   62015 cri.go:89] found id: ""
	I0103 20:18:10.504886   62015 logs.go:284] 2 containers: [08f95eed823c13190efb38f0b605b92442b8229f1fd1e3b9ab3f2d7fdf18c052 367b9549fe5f7c717d71d33d0fbf5559d0b671b4eec29201566aa8354781474d]
	I0103 20:18:10.504935   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:10.509189   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:10.513671   62015 logs.go:123] Gathering logs for storage-provisioner [367b9549fe5f7c717d71d33d0fbf5559d0b671b4eec29201566aa8354781474d] ...
	I0103 20:18:10.513692   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 367b9549fe5f7c717d71d33d0fbf5559d0b671b4eec29201566aa8354781474d"
	I0103 20:18:10.553961   62015 logs.go:123] Gathering logs for kubelet ...
	I0103 20:18:10.553988   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 20:18:10.606422   62015 logs.go:123] Gathering logs for dmesg ...
	I0103 20:18:10.606463   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 20:18:10.620647   62015 logs.go:123] Gathering logs for kube-controller-manager [67f470e7e603d5253da506a4ad4eacce339e32b382bb6ad5e981e6f0c40abb85] ...
	I0103 20:18:10.620677   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 67f470e7e603d5253da506a4ad4eacce339e32b382bb6ad5e981e6f0c40abb85"
	I0103 20:18:10.678322   62015 logs.go:123] Gathering logs for describe nodes ...
	I0103 20:18:10.678358   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0103 20:18:10.806514   62015 logs.go:123] Gathering logs for container status ...
	I0103 20:18:10.806569   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 20:18:10.862551   62015 logs.go:123] Gathering logs for kube-apiserver [fb19a705262540d0025443f8a9db8a3139cffa87d8fd8f412b12547818a91a8b] ...
	I0103 20:18:10.862589   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb19a705262540d0025443f8a9db8a3139cffa87d8fd8f412b12547818a91a8b"
	I0103 20:18:10.917533   62015 logs.go:123] Gathering logs for etcd [f7d2f606bd4457e00acad3e38e40d2a0eeba1830b665bd0b453061447cb63748] ...
	I0103 20:18:10.917566   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7d2f606bd4457e00acad3e38e40d2a0eeba1830b665bd0b453061447cb63748"
	I0103 20:18:10.988668   62015 logs.go:123] Gathering logs for storage-provisioner [08f95eed823c13190efb38f0b605b92442b8229f1fd1e3b9ab3f2d7fdf18c052] ...
	I0103 20:18:10.988702   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08f95eed823c13190efb38f0b605b92442b8229f1fd1e3b9ab3f2d7fdf18c052"
	I0103 20:18:11.030485   62015 logs.go:123] Gathering logs for CRI-O ...
	I0103 20:18:11.030549   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0103 20:18:11.425651   62015 logs.go:123] Gathering logs for coredns [b13d0a23b2b29e43671041dd6b0519d5cdc0ff78c47236c54585056c177f7f4a] ...
	I0103 20:18:11.425686   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b13d0a23b2b29e43671041dd6b0519d5cdc0ff78c47236c54585056c177f7f4a"
	I0103 20:18:11.481991   62015 logs.go:123] Gathering logs for kube-scheduler [03433af76d74a120ca843ab0d9d45d2a0808d9048bf656bff68d7b7371082893] ...
	I0103 20:18:11.482019   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03433af76d74a120ca843ab0d9d45d2a0808d9048bf656bff68d7b7371082893"
	I0103 20:18:11.526299   62015 logs.go:123] Gathering logs for kube-proxy [250be399ab1a0c358e410293a6952aadd55d48a462f2edb9c6d0b560eb323cd8] ...
	I0103 20:18:11.526335   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 250be399ab1a0c358e410293a6952aadd55d48a462f2edb9c6d0b560eb323cd8"
	I0103 20:18:14.082821   62015 system_pods.go:59] 8 kube-system pods found
	I0103 20:18:14.082847   62015 system_pods.go:61] "coredns-76f75df574-rbx58" [d5e91e6a-e3f9-4dbc-83ff-3069cb67847c] Running
	I0103 20:18:14.082853   62015 system_pods.go:61] "etcd-no-preload-749210" [3cfe84f3-28bd-490f-a7fc-152c1b9784ce] Running
	I0103 20:18:14.082857   62015 system_pods.go:61] "kube-apiserver-no-preload-749210" [1d9d03fa-23c6-4432-b7ec-905fcab8a628] Running
	I0103 20:18:14.082861   62015 system_pods.go:61] "kube-controller-manager-no-preload-749210" [4e4207ef-8844-4547-88a4-b12026250554] Running
	I0103 20:18:14.082865   62015 system_pods.go:61] "kube-proxy-5hwf4" [98fafdf5-9a74-4c9f-96eb-20064c72c4e1] Running
	I0103 20:18:14.082870   62015 system_pods.go:61] "kube-scheduler-no-preload-749210" [21e70024-26b0-4740-ba52-99893ca20809] Running
	I0103 20:18:14.082876   62015 system_pods.go:61] "metrics-server-57f55c9bc5-tqn5m" [8cc1dc91-fafb-4405-8820-a7f99ccbbb0c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0103 20:18:14.082881   62015 system_pods.go:61] "storage-provisioner" [1bf4f1d7-c083-47e7-9976-76bbc72e7bff] Running
	I0103 20:18:14.082887   62015 system_pods.go:74] duration metric: took 3.942878112s to wait for pod list to return data ...
	I0103 20:18:14.082893   62015 default_sa.go:34] waiting for default service account to be created ...
	I0103 20:18:14.087079   62015 default_sa.go:45] found service account: "default"
	I0103 20:18:14.087106   62015 default_sa.go:55] duration metric: took 4.207195ms for default service account to be created ...
	I0103 20:18:14.087115   62015 system_pods.go:116] waiting for k8s-apps to be running ...
	I0103 20:18:14.094161   62015 system_pods.go:86] 8 kube-system pods found
	I0103 20:18:14.094185   62015 system_pods.go:89] "coredns-76f75df574-rbx58" [d5e91e6a-e3f9-4dbc-83ff-3069cb67847c] Running
	I0103 20:18:14.094190   62015 system_pods.go:89] "etcd-no-preload-749210" [3cfe84f3-28bd-490f-a7fc-152c1b9784ce] Running
	I0103 20:18:14.094195   62015 system_pods.go:89] "kube-apiserver-no-preload-749210" [1d9d03fa-23c6-4432-b7ec-905fcab8a628] Running
	I0103 20:18:14.094199   62015 system_pods.go:89] "kube-controller-manager-no-preload-749210" [4e4207ef-8844-4547-88a4-b12026250554] Running
	I0103 20:18:14.094204   62015 system_pods.go:89] "kube-proxy-5hwf4" [98fafdf5-9a74-4c9f-96eb-20064c72c4e1] Running
	I0103 20:18:14.094208   62015 system_pods.go:89] "kube-scheduler-no-preload-749210" [21e70024-26b0-4740-ba52-99893ca20809] Running
	I0103 20:18:14.094219   62015 system_pods.go:89] "metrics-server-57f55c9bc5-tqn5m" [8cc1dc91-fafb-4405-8820-a7f99ccbbb0c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0103 20:18:14.094231   62015 system_pods.go:89] "storage-provisioner" [1bf4f1d7-c083-47e7-9976-76bbc72e7bff] Running
	I0103 20:18:14.094244   62015 system_pods.go:126] duration metric: took 7.123869ms to wait for k8s-apps to be running ...
	I0103 20:18:14.094256   62015 system_svc.go:44] waiting for kubelet service to be running ....
	I0103 20:18:14.094305   62015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 20:18:14.110365   62015 system_svc.go:56] duration metric: took 16.099582ms WaitForService to wait for kubelet.
	I0103 20:18:14.110400   62015 kubeadm.go:581] duration metric: took 4m23.767155373s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0103 20:18:14.110439   62015 node_conditions.go:102] verifying NodePressure condition ...
	I0103 20:18:14.113809   62015 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0103 20:18:14.113833   62015 node_conditions.go:123] node cpu capacity is 2
	I0103 20:18:14.113842   62015 node_conditions.go:105] duration metric: took 3.394645ms to run NodePressure ...
	I0103 20:18:14.113853   62015 start.go:228] waiting for startup goroutines ...
	I0103 20:18:14.113859   62015 start.go:233] waiting for cluster config update ...
	I0103 20:18:14.113868   62015 start.go:242] writing updated cluster config ...
	I0103 20:18:14.114102   62015 ssh_runner.go:195] Run: rm -f paused
	I0103 20:18:14.163090   62015 start.go:600] kubectl: 1.29.0, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0103 20:18:14.165173   62015 out.go:177] * Done! kubectl is now configured to use "no-preload-749210" cluster and "default" namespace by default
	I0103 20:18:10.896026   62050 pod_ready.go:81] duration metric: took 4m0.007814497s waiting for pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace to be "Ready" ...
	E0103 20:18:10.896053   62050 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0103 20:18:10.896062   62050 pod_ready.go:38] duration metric: took 4m4.550955933s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:18:10.896076   62050 api_server.go:52] waiting for apiserver process to appear ...
	I0103 20:18:10.896109   62050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0103 20:18:10.896169   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0103 20:18:10.965458   62050 cri.go:89] found id: "ce56b3ad3d4b7d59eb61a980afcc5105c128779756da52fc886d00597b03c6cc"
	I0103 20:18:10.965485   62050 cri.go:89] found id: ""
	I0103 20:18:10.965494   62050 logs.go:284] 1 containers: [ce56b3ad3d4b7d59eb61a980afcc5105c128779756da52fc886d00597b03c6cc]
	I0103 20:18:10.965552   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:10.970818   62050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0103 20:18:10.970890   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0103 20:18:11.014481   62050 cri.go:89] found id: "3bacdb6bf662499f958fae41c1520a7f963ecec63e66bca7076854532316bd9d"
	I0103 20:18:11.014509   62050 cri.go:89] found id: ""
	I0103 20:18:11.014537   62050 logs.go:284] 1 containers: [3bacdb6bf662499f958fae41c1520a7f963ecec63e66bca7076854532316bd9d]
	I0103 20:18:11.014602   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:11.019157   62050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0103 20:18:11.019220   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0103 20:18:11.068101   62050 cri.go:89] found id: "e2370f79911fd2108cab00f1fb2d4c8f16fadefbef4d6ee135f875edd865be06"
	I0103 20:18:11.068129   62050 cri.go:89] found id: ""
	I0103 20:18:11.068138   62050 logs.go:284] 1 containers: [e2370f79911fd2108cab00f1fb2d4c8f16fadefbef4d6ee135f875edd865be06]
	I0103 20:18:11.068202   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:11.075018   62050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0103 20:18:11.075098   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0103 20:18:11.122838   62050 cri.go:89] found id: "abbaa7d1ca8582bf77f6d0951a5eca6711a89471f2239b41df4bd359e01e5a7c"
	I0103 20:18:11.122862   62050 cri.go:89] found id: ""
	I0103 20:18:11.122871   62050 logs.go:284] 1 containers: [abbaa7d1ca8582bf77f6d0951a5eca6711a89471f2239b41df4bd359e01e5a7c]
	I0103 20:18:11.122925   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:11.128488   62050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0103 20:18:11.128563   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0103 20:18:11.178133   62050 cri.go:89] found id: "b1525243614b036cac3291b5a2fbf29c28fdb68757d81418e7e09cfec2c36032"
	I0103 20:18:11.178160   62050 cri.go:89] found id: ""
	I0103 20:18:11.178170   62050 logs.go:284] 1 containers: [b1525243614b036cac3291b5a2fbf29c28fdb68757d81418e7e09cfec2c36032]
	I0103 20:18:11.178233   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:11.182823   62050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0103 20:18:11.182900   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0103 20:18:11.229175   62050 cri.go:89] found id: "2b7de3342fdb5423de182fd009630fc36cee11de3a5f5ff06603fedd80e2d94b"
	I0103 20:18:11.229207   62050 cri.go:89] found id: ""
	I0103 20:18:11.229218   62050 logs.go:284] 1 containers: [2b7de3342fdb5423de182fd009630fc36cee11de3a5f5ff06603fedd80e2d94b]
	I0103 20:18:11.229271   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:11.238617   62050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0103 20:18:11.238686   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0103 20:18:11.289070   62050 cri.go:89] found id: ""
	I0103 20:18:11.289107   62050 logs.go:284] 0 containers: []
	W0103 20:18:11.289115   62050 logs.go:286] No container was found matching "kindnet"
	I0103 20:18:11.289121   62050 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0103 20:18:11.289204   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0103 20:18:11.333342   62050 cri.go:89] found id: "3d1fa8b05cd7cc8dd982013b6295fb6f79f2c8db93aa0c408fc57ea7e898125a"
	I0103 20:18:11.333365   62050 cri.go:89] found id: "365147e198ba529e04c62145182c0e5131c1812d4b8842299d0236cbf556e40f"
	I0103 20:18:11.333370   62050 cri.go:89] found id: ""
	I0103 20:18:11.333376   62050 logs.go:284] 2 containers: [3d1fa8b05cd7cc8dd982013b6295fb6f79f2c8db93aa0c408fc57ea7e898125a 365147e198ba529e04c62145182c0e5131c1812d4b8842299d0236cbf556e40f]
	I0103 20:18:11.333430   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:11.338236   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:11.342643   62050 logs.go:123] Gathering logs for container status ...
	I0103 20:18:11.342668   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 20:18:11.395443   62050 logs.go:123] Gathering logs for describe nodes ...
	I0103 20:18:11.395471   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0103 20:18:11.561224   62050 logs.go:123] Gathering logs for etcd [3bacdb6bf662499f958fae41c1520a7f963ecec63e66bca7076854532316bd9d] ...
	I0103 20:18:11.561258   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bacdb6bf662499f958fae41c1520a7f963ecec63e66bca7076854532316bd9d"
	I0103 20:18:11.619642   62050 logs.go:123] Gathering logs for kube-scheduler [abbaa7d1ca8582bf77f6d0951a5eca6711a89471f2239b41df4bd359e01e5a7c] ...
	I0103 20:18:11.619677   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abbaa7d1ca8582bf77f6d0951a5eca6711a89471f2239b41df4bd359e01e5a7c"
	I0103 20:18:11.656329   62050 logs.go:123] Gathering logs for kube-controller-manager [2b7de3342fdb5423de182fd009630fc36cee11de3a5f5ff06603fedd80e2d94b] ...
	I0103 20:18:11.656370   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b7de3342fdb5423de182fd009630fc36cee11de3a5f5ff06603fedd80e2d94b"
	I0103 20:18:11.710651   62050 logs.go:123] Gathering logs for storage-provisioner [3d1fa8b05cd7cc8dd982013b6295fb6f79f2c8db93aa0c408fc57ea7e898125a] ...
	I0103 20:18:11.710685   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d1fa8b05cd7cc8dd982013b6295fb6f79f2c8db93aa0c408fc57ea7e898125a"
	I0103 20:18:11.756839   62050 logs.go:123] Gathering logs for storage-provisioner [365147e198ba529e04c62145182c0e5131c1812d4b8842299d0236cbf556e40f] ...
	I0103 20:18:11.756866   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 365147e198ba529e04c62145182c0e5131c1812d4b8842299d0236cbf556e40f"
	I0103 20:18:11.791885   62050 logs.go:123] Gathering logs for dmesg ...
	I0103 20:18:11.791920   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 20:18:11.805161   62050 logs.go:123] Gathering logs for CRI-O ...
	I0103 20:18:11.805185   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0103 20:18:12.261916   62050 logs.go:123] Gathering logs for kubelet ...
	I0103 20:18:12.261973   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 20:18:12.316486   62050 logs.go:123] Gathering logs for kube-apiserver [ce56b3ad3d4b7d59eb61a980afcc5105c128779756da52fc886d00597b03c6cc] ...
	I0103 20:18:12.316525   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce56b3ad3d4b7d59eb61a980afcc5105c128779756da52fc886d00597b03c6cc"
	I0103 20:18:12.367998   62050 logs.go:123] Gathering logs for coredns [e2370f79911fd2108cab00f1fb2d4c8f16fadefbef4d6ee135f875edd865be06] ...
	I0103 20:18:12.368032   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2370f79911fd2108cab00f1fb2d4c8f16fadefbef4d6ee135f875edd865be06"
	I0103 20:18:12.404277   62050 logs.go:123] Gathering logs for kube-proxy [b1525243614b036cac3291b5a2fbf29c28fdb68757d81418e7e09cfec2c36032] ...
	I0103 20:18:12.404316   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1525243614b036cac3291b5a2fbf29c28fdb68757d81418e7e09cfec2c36032"
	I0103 20:18:14.943727   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:18:14.959322   62050 api_server.go:72] duration metric: took 4m14.593955756s to wait for apiserver process to appear ...
	I0103 20:18:14.959344   62050 api_server.go:88] waiting for apiserver healthz status ...
	I0103 20:18:14.959384   62050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0103 20:18:14.959443   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0103 20:18:15.001580   62050 cri.go:89] found id: "ce56b3ad3d4b7d59eb61a980afcc5105c128779756da52fc886d00597b03c6cc"
	I0103 20:18:15.001613   62050 cri.go:89] found id: ""
	I0103 20:18:15.001624   62050 logs.go:284] 1 containers: [ce56b3ad3d4b7d59eb61a980afcc5105c128779756da52fc886d00597b03c6cc]
	I0103 20:18:15.001688   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:15.005964   62050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0103 20:18:15.006044   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0103 20:18:15.043364   62050 cri.go:89] found id: "3bacdb6bf662499f958fae41c1520a7f963ecec63e66bca7076854532316bd9d"
	I0103 20:18:15.043393   62050 cri.go:89] found id: ""
	I0103 20:18:15.043403   62050 logs.go:284] 1 containers: [3bacdb6bf662499f958fae41c1520a7f963ecec63e66bca7076854532316bd9d]
	I0103 20:18:15.043461   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:15.047226   62050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0103 20:18:15.047291   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0103 20:18:15.091700   62050 cri.go:89] found id: "e2370f79911fd2108cab00f1fb2d4c8f16fadefbef4d6ee135f875edd865be06"
	I0103 20:18:15.091727   62050 cri.go:89] found id: ""
	I0103 20:18:15.091736   62050 logs.go:284] 1 containers: [e2370f79911fd2108cab00f1fb2d4c8f16fadefbef4d6ee135f875edd865be06]
	I0103 20:18:15.091794   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:15.095953   62050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0103 20:18:15.096038   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0103 20:18:15.132757   62050 cri.go:89] found id: "abbaa7d1ca8582bf77f6d0951a5eca6711a89471f2239b41df4bd359e01e5a7c"
	I0103 20:18:15.132785   62050 cri.go:89] found id: ""
	I0103 20:18:15.132796   62050 logs.go:284] 1 containers: [abbaa7d1ca8582bf77f6d0951a5eca6711a89471f2239b41df4bd359e01e5a7c]
	I0103 20:18:15.132856   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:15.137574   62050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0103 20:18:15.137637   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0103 20:18:15.174799   62050 cri.go:89] found id: "b1525243614b036cac3291b5a2fbf29c28fdb68757d81418e7e09cfec2c36032"
	I0103 20:18:15.174827   62050 cri.go:89] found id: ""
	I0103 20:18:15.174836   62050 logs.go:284] 1 containers: [b1525243614b036cac3291b5a2fbf29c28fdb68757d81418e7e09cfec2c36032]
	I0103 20:18:15.174893   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:15.179052   62050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0103 20:18:15.179119   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0103 20:18:15.218730   62050 cri.go:89] found id: "2b7de3342fdb5423de182fd009630fc36cee11de3a5f5ff06603fedd80e2d94b"
	I0103 20:18:15.218761   62050 cri.go:89] found id: ""
	I0103 20:18:15.218770   62050 logs.go:284] 1 containers: [2b7de3342fdb5423de182fd009630fc36cee11de3a5f5ff06603fedd80e2d94b]
	I0103 20:18:15.218829   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:15.222730   62050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0103 20:18:15.222796   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0103 20:18:15.265020   62050 cri.go:89] found id: ""
	I0103 20:18:15.265046   62050 logs.go:284] 0 containers: []
	W0103 20:18:15.265053   62050 logs.go:286] No container was found matching "kindnet"
	I0103 20:18:15.265059   62050 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0103 20:18:15.265122   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0103 20:18:15.307032   62050 cri.go:89] found id: "3d1fa8b05cd7cc8dd982013b6295fb6f79f2c8db93aa0c408fc57ea7e898125a"
	I0103 20:18:15.307059   62050 cri.go:89] found id: "365147e198ba529e04c62145182c0e5131c1812d4b8842299d0236cbf556e40f"
	I0103 20:18:15.307065   62050 cri.go:89] found id: ""
	I0103 20:18:15.307074   62050 logs.go:284] 2 containers: [3d1fa8b05cd7cc8dd982013b6295fb6f79f2c8db93aa0c408fc57ea7e898125a 365147e198ba529e04c62145182c0e5131c1812d4b8842299d0236cbf556e40f]
	I0103 20:18:15.307132   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:15.311275   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:15.315089   62050 logs.go:123] Gathering logs for container status ...
	I0103 20:18:15.315113   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 20:18:15.361815   62050 logs.go:123] Gathering logs for describe nodes ...
	I0103 20:18:15.361840   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0103 20:18:15.493913   62050 logs.go:123] Gathering logs for kube-apiserver [ce56b3ad3d4b7d59eb61a980afcc5105c128779756da52fc886d00597b03c6cc] ...
	I0103 20:18:15.493947   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce56b3ad3d4b7d59eb61a980afcc5105c128779756da52fc886d00597b03c6cc"
	I0103 20:18:15.553841   62050 logs.go:123] Gathering logs for coredns [e2370f79911fd2108cab00f1fb2d4c8f16fadefbef4d6ee135f875edd865be06] ...
	I0103 20:18:15.553881   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2370f79911fd2108cab00f1fb2d4c8f16fadefbef4d6ee135f875edd865be06"
	I0103 20:18:15.590885   62050 logs.go:123] Gathering logs for storage-provisioner [365147e198ba529e04c62145182c0e5131c1812d4b8842299d0236cbf556e40f] ...
	I0103 20:18:15.590911   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 365147e198ba529e04c62145182c0e5131c1812d4b8842299d0236cbf556e40f"
	I0103 20:18:15.630332   62050 logs.go:123] Gathering logs for CRI-O ...
	I0103 20:18:15.630357   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0103 20:18:16.074625   62050 logs.go:123] Gathering logs for kubelet ...
	I0103 20:18:16.074659   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 20:18:16.133116   62050 logs.go:123] Gathering logs for dmesg ...
	I0103 20:18:16.133161   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 20:18:16.147559   62050 logs.go:123] Gathering logs for kube-controller-manager [2b7de3342fdb5423de182fd009630fc36cee11de3a5f5ff06603fedd80e2d94b] ...
	I0103 20:18:16.147585   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b7de3342fdb5423de182fd009630fc36cee11de3a5f5ff06603fedd80e2d94b"
	I0103 20:18:16.199131   62050 logs.go:123] Gathering logs for storage-provisioner [3d1fa8b05cd7cc8dd982013b6295fb6f79f2c8db93aa0c408fc57ea7e898125a] ...
	I0103 20:18:16.199167   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d1fa8b05cd7cc8dd982013b6295fb6f79f2c8db93aa0c408fc57ea7e898125a"
	I0103 20:18:16.238085   62050 logs.go:123] Gathering logs for etcd [3bacdb6bf662499f958fae41c1520a7f963ecec63e66bca7076854532316bd9d] ...
	I0103 20:18:16.238116   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bacdb6bf662499f958fae41c1520a7f963ecec63e66bca7076854532316bd9d"
	I0103 20:18:16.294992   62050 logs.go:123] Gathering logs for kube-proxy [b1525243614b036cac3291b5a2fbf29c28fdb68757d81418e7e09cfec2c36032] ...
	I0103 20:18:16.295032   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1525243614b036cac3291b5a2fbf29c28fdb68757d81418e7e09cfec2c36032"
	I0103 20:18:16.333862   62050 logs.go:123] Gathering logs for kube-scheduler [abbaa7d1ca8582bf77f6d0951a5eca6711a89471f2239b41df4bd359e01e5a7c] ...
	I0103 20:18:16.333896   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abbaa7d1ca8582bf77f6d0951a5eca6711a89471f2239b41df4bd359e01e5a7c"
	I0103 20:18:18.875707   62050 api_server.go:253] Checking apiserver healthz at https://192.168.39.139:8444/healthz ...
	I0103 20:18:18.882546   62050 api_server.go:279] https://192.168.39.139:8444/healthz returned 200:
	ok
	I0103 20:18:18.884633   62050 api_server.go:141] control plane version: v1.28.4
	I0103 20:18:18.884662   62050 api_server.go:131] duration metric: took 3.925311693s to wait for apiserver health ...
	I0103 20:18:18.884672   62050 system_pods.go:43] waiting for kube-system pods to appear ...
	I0103 20:18:18.884701   62050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0103 20:18:18.884765   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0103 20:18:18.922149   62050 cri.go:89] found id: "ce56b3ad3d4b7d59eb61a980afcc5105c128779756da52fc886d00597b03c6cc"
	I0103 20:18:18.922170   62050 cri.go:89] found id: ""
	I0103 20:18:18.922177   62050 logs.go:284] 1 containers: [ce56b3ad3d4b7d59eb61a980afcc5105c128779756da52fc886d00597b03c6cc]
	I0103 20:18:18.922223   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:18.926886   62050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0103 20:18:18.926952   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0103 20:18:18.970009   62050 cri.go:89] found id: "3bacdb6bf662499f958fae41c1520a7f963ecec63e66bca7076854532316bd9d"
	I0103 20:18:18.970035   62050 cri.go:89] found id: ""
	I0103 20:18:18.970043   62050 logs.go:284] 1 containers: [3bacdb6bf662499f958fae41c1520a7f963ecec63e66bca7076854532316bd9d]
	I0103 20:18:18.970107   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:18.974349   62050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0103 20:18:18.974413   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0103 20:18:19.016970   62050 cri.go:89] found id: "e2370f79911fd2108cab00f1fb2d4c8f16fadefbef4d6ee135f875edd865be06"
	I0103 20:18:19.016994   62050 cri.go:89] found id: ""
	I0103 20:18:19.017004   62050 logs.go:284] 1 containers: [e2370f79911fd2108cab00f1fb2d4c8f16fadefbef4d6ee135f875edd865be06]
	I0103 20:18:19.017069   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:19.021899   62050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0103 20:18:19.021959   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0103 20:18:19.076044   62050 cri.go:89] found id: "abbaa7d1ca8582bf77f6d0951a5eca6711a89471f2239b41df4bd359e01e5a7c"
	I0103 20:18:19.076074   62050 cri.go:89] found id: ""
	I0103 20:18:19.076081   62050 logs.go:284] 1 containers: [abbaa7d1ca8582bf77f6d0951a5eca6711a89471f2239b41df4bd359e01e5a7c]
	I0103 20:18:19.076134   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:19.081699   62050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0103 20:18:19.081775   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0103 20:18:19.120022   62050 cri.go:89] found id: "b1525243614b036cac3291b5a2fbf29c28fdb68757d81418e7e09cfec2c36032"
	I0103 20:18:19.120046   62050 cri.go:89] found id: ""
	I0103 20:18:19.120053   62050 logs.go:284] 1 containers: [b1525243614b036cac3291b5a2fbf29c28fdb68757d81418e7e09cfec2c36032]
	I0103 20:18:19.120107   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:19.124627   62050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0103 20:18:19.124698   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0103 20:18:19.165431   62050 cri.go:89] found id: "2b7de3342fdb5423de182fd009630fc36cee11de3a5f5ff06603fedd80e2d94b"
	I0103 20:18:19.165453   62050 cri.go:89] found id: ""
	I0103 20:18:19.165463   62050 logs.go:284] 1 containers: [2b7de3342fdb5423de182fd009630fc36cee11de3a5f5ff06603fedd80e2d94b]
	I0103 20:18:19.165513   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:19.170214   62050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0103 20:18:19.170282   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0103 20:18:19.208676   62050 cri.go:89] found id: ""
	I0103 20:18:19.208706   62050 logs.go:284] 0 containers: []
	W0103 20:18:19.208716   62050 logs.go:286] No container was found matching "kindnet"
	I0103 20:18:19.208724   62050 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0103 20:18:19.208782   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0103 20:18:19.246065   62050 cri.go:89] found id: "3d1fa8b05cd7cc8dd982013b6295fb6f79f2c8db93aa0c408fc57ea7e898125a"
	I0103 20:18:19.246092   62050 cri.go:89] found id: "365147e198ba529e04c62145182c0e5131c1812d4b8842299d0236cbf556e40f"
	I0103 20:18:19.246101   62050 cri.go:89] found id: ""
	I0103 20:18:19.246109   62050 logs.go:284] 2 containers: [3d1fa8b05cd7cc8dd982013b6295fb6f79f2c8db93aa0c408fc57ea7e898125a 365147e198ba529e04c62145182c0e5131c1812d4b8842299d0236cbf556e40f]
	I0103 20:18:19.246169   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:19.250217   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:19.259598   62050 logs.go:123] Gathering logs for CRI-O ...
	I0103 20:18:19.259628   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0103 20:18:19.643718   62050 logs.go:123] Gathering logs for kube-apiserver [ce56b3ad3d4b7d59eb61a980afcc5105c128779756da52fc886d00597b03c6cc] ...
	I0103 20:18:19.643755   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce56b3ad3d4b7d59eb61a980afcc5105c128779756da52fc886d00597b03c6cc"
	I0103 20:18:19.697873   62050 logs.go:123] Gathering logs for etcd [3bacdb6bf662499f958fae41c1520a7f963ecec63e66bca7076854532316bd9d] ...
	I0103 20:18:19.697905   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bacdb6bf662499f958fae41c1520a7f963ecec63e66bca7076854532316bd9d"
	I0103 20:18:19.762995   62050 logs.go:123] Gathering logs for kubelet ...
	I0103 20:18:19.763030   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 20:18:19.830835   62050 logs.go:123] Gathering logs for describe nodes ...
	I0103 20:18:19.830871   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0103 20:18:19.969465   62050 logs.go:123] Gathering logs for kube-proxy [b1525243614b036cac3291b5a2fbf29c28fdb68757d81418e7e09cfec2c36032] ...
	I0103 20:18:19.969501   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1525243614b036cac3291b5a2fbf29c28fdb68757d81418e7e09cfec2c36032"
	I0103 20:18:20.011269   62050 logs.go:123] Gathering logs for kube-controller-manager [2b7de3342fdb5423de182fd009630fc36cee11de3a5f5ff06603fedd80e2d94b] ...
	I0103 20:18:20.011301   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b7de3342fdb5423de182fd009630fc36cee11de3a5f5ff06603fedd80e2d94b"
	I0103 20:18:20.059317   62050 logs.go:123] Gathering logs for storage-provisioner [3d1fa8b05cd7cc8dd982013b6295fb6f79f2c8db93aa0c408fc57ea7e898125a] ...
	I0103 20:18:20.059352   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d1fa8b05cd7cc8dd982013b6295fb6f79f2c8db93aa0c408fc57ea7e898125a"
	I0103 20:18:20.099428   62050 logs.go:123] Gathering logs for storage-provisioner [365147e198ba529e04c62145182c0e5131c1812d4b8842299d0236cbf556e40f] ...
	I0103 20:18:20.099455   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 365147e198ba529e04c62145182c0e5131c1812d4b8842299d0236cbf556e40f"
	I0103 20:18:20.135773   62050 logs.go:123] Gathering logs for dmesg ...
	I0103 20:18:20.135809   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 20:18:20.149611   62050 logs.go:123] Gathering logs for kube-scheduler [abbaa7d1ca8582bf77f6d0951a5eca6711a89471f2239b41df4bd359e01e5a7c] ...
	I0103 20:18:20.149649   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abbaa7d1ca8582bf77f6d0951a5eca6711a89471f2239b41df4bd359e01e5a7c"
	I0103 20:18:20.190742   62050 logs.go:123] Gathering logs for container status ...
	I0103 20:18:20.190788   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 20:18:20.241115   62050 logs.go:123] Gathering logs for coredns [e2370f79911fd2108cab00f1fb2d4c8f16fadefbef4d6ee135f875edd865be06] ...
	I0103 20:18:20.241142   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2370f79911fd2108cab00f1fb2d4c8f16fadefbef4d6ee135f875edd865be06"
	I0103 20:18:22.789475   62050 system_pods.go:59] 8 kube-system pods found
	I0103 20:18:22.789502   62050 system_pods.go:61] "coredns-5dd5756b68-zxzqg" [d066762e-7e1f-4b3a-9b21-6a7a3ca53edd] Running
	I0103 20:18:22.789507   62050 system_pods.go:61] "etcd-default-k8s-diff-port-018788" [c0023ec6-ae61-4532-840e-287e9945f4ec] Running
	I0103 20:18:22.789512   62050 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-018788" [bba03f36-cef8-4e19-adc5-1a65756bdf1c] Running
	I0103 20:18:22.789516   62050 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-018788" [baf7a3c2-3573-4977-be30-d63e4df2de22] Running
	I0103 20:18:22.789520   62050 system_pods.go:61] "kube-proxy-wqjlv" [de5a1b04-4bce-4111-bfe8-2adb2f947d78] Running
	I0103 20:18:22.789527   62050 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-018788" [cdc74e5c-0085-49ae-9471-fce52a1a6b2f] Running
	I0103 20:18:22.789533   62050 system_pods.go:61] "metrics-server-57f55c9bc5-pgbbj" [ee3963d9-1627-4e78-91e5-1f92c2011f4b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0103 20:18:22.789538   62050 system_pods.go:61] "storage-provisioner" [ef3511cb-5587-4ea5-86b6-d52cc5afb226] Running
	I0103 20:18:22.789544   62050 system_pods.go:74] duration metric: took 3.904866616s to wait for pod list to return data ...
	I0103 20:18:22.789551   62050 default_sa.go:34] waiting for default service account to be created ...
	I0103 20:18:22.791976   62050 default_sa.go:45] found service account: "default"
	I0103 20:18:22.792000   62050 default_sa.go:55] duration metric: took 2.444229ms for default service account to be created ...
	I0103 20:18:22.792007   62050 system_pods.go:116] waiting for k8s-apps to be running ...
	I0103 20:18:22.797165   62050 system_pods.go:86] 8 kube-system pods found
	I0103 20:18:22.797186   62050 system_pods.go:89] "coredns-5dd5756b68-zxzqg" [d066762e-7e1f-4b3a-9b21-6a7a3ca53edd] Running
	I0103 20:18:22.797192   62050 system_pods.go:89] "etcd-default-k8s-diff-port-018788" [c0023ec6-ae61-4532-840e-287e9945f4ec] Running
	I0103 20:18:22.797196   62050 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-018788" [bba03f36-cef8-4e19-adc5-1a65756bdf1c] Running
	I0103 20:18:22.797200   62050 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-018788" [baf7a3c2-3573-4977-be30-d63e4df2de22] Running
	I0103 20:18:22.797204   62050 system_pods.go:89] "kube-proxy-wqjlv" [de5a1b04-4bce-4111-bfe8-2adb2f947d78] Running
	I0103 20:18:22.797209   62050 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-018788" [cdc74e5c-0085-49ae-9471-fce52a1a6b2f] Running
	I0103 20:18:22.797221   62050 system_pods.go:89] "metrics-server-57f55c9bc5-pgbbj" [ee3963d9-1627-4e78-91e5-1f92c2011f4b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0103 20:18:22.797234   62050 system_pods.go:89] "storage-provisioner" [ef3511cb-5587-4ea5-86b6-d52cc5afb226] Running
	I0103 20:18:22.797244   62050 system_pods.go:126] duration metric: took 5.231578ms to wait for k8s-apps to be running ...
	I0103 20:18:22.797256   62050 system_svc.go:44] waiting for kubelet service to be running ....
	I0103 20:18:22.797303   62050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 20:18:22.811467   62050 system_svc.go:56] duration metric: took 14.201511ms WaitForService to wait for kubelet.
	I0103 20:18:22.811503   62050 kubeadm.go:581] duration metric: took 4m22.446143128s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0103 20:18:22.811533   62050 node_conditions.go:102] verifying NodePressure condition ...
	I0103 20:18:22.814594   62050 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0103 20:18:22.814617   62050 node_conditions.go:123] node cpu capacity is 2
	I0103 20:18:22.814629   62050 node_conditions.go:105] duration metric: took 3.089727ms to run NodePressure ...
	I0103 20:18:22.814639   62050 start.go:228] waiting for startup goroutines ...
	I0103 20:18:22.814645   62050 start.go:233] waiting for cluster config update ...
	I0103 20:18:22.814654   62050 start.go:242] writing updated cluster config ...
	I0103 20:18:22.814897   62050 ssh_runner.go:195] Run: rm -f paused
	I0103 20:18:22.864761   62050 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0103 20:18:22.866755   62050 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-018788" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	-- Journal begins at Wed 2024-01-03 20:13:01 UTC, ends at Wed 2024-01-03 20:27:15 UTC. --
	Jan 03 20:27:15 no-preload-749210 crio[715]: time="2024-01-03 20:27:15.830831701Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704313635830765476,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=0d87e7bd-8912-42f5-9394-099ba8c4ba31 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 20:27:15 no-preload-749210 crio[715]: time="2024-01-03 20:27:15.834134656Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a5586f40-f3c2-4213-ad6c-039d53157072 name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:27:15 no-preload-749210 crio[715]: time="2024-01-03 20:27:15.834228457Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a5586f40-f3c2-4213-ad6c-039d53157072 name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:27:15 no-preload-749210 crio[715]: time="2024-01-03 20:27:15.834401955Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:08f95eed823c13190efb38f0b605b92442b8229f1fd1e3b9ab3f2d7fdf18c052,PodSandboxId:5aa887d440d33227e21a77ca0bfecf128beb453149ee7d388729f58ad577fc91,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1704312859463688336,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bf4f1d7-c083-47e7-9976-76bbc72e7bff,},Annotations:map[string]string{io.kubernetes.container.hash: b646abf7,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf18f2f2f6b890569cbe272741251b2382ba323933aa17c91e69ebe474026827,PodSandboxId:0fecf732af3d98284f07096a6c2154e8957b91166978fddea56d5eb53d42eb2e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1704312840492955145,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9a560811-1b16-4bbb-98e8-ceb54e9f8bc8,},Annotations:map[string]string{io.kubernetes.container.hash: 50f62300,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b13d0a23b2b29e43671041dd6b0519d5cdc0ff78c47236c54585056c177f7f4a,PodSandboxId:8a53c0c544eaa90f4252f374271277142681ae680d6289fc7b7fdb1fecb3ee6c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1704312836616376390,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-rbx58,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5e91e6a-e3f9-4dbc-83ff-3069cb67847c,},Annotations:map[string]string{io.kubernetes.container.hash: e0299e54,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:367b9549fe5f7c717d71d33d0fbf5559d0b671b4eec29201566aa8354781474d,PodSandboxId:5aa887d440d33227e21a77ca0bfecf128beb453149ee7d388729f58ad577fc91,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1704312829187059443,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 1bf4f1d7-c083-47e7-9976-76bbc72e7bff,},Annotations:map[string]string{io.kubernetes.container.hash: b646abf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:250be399ab1a0c358e410293a6952aadd55d48a462f2edb9c6d0b560eb323cd8,PodSandboxId:11b51934a004f8813caad8f3a521040e3860a408abcaa2879a6b63f2e74666b6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1704312829142220806,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5hwf4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98fafdf5-9a7
4-4c9f-96eb-20064c72c4e1,},Annotations:map[string]string{io.kubernetes.container.hash: a256ba75,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03433af76d74a120ca843ab0d9d45d2a0808d9048bf656bff68d7b7371082893,PodSandboxId:55ec540cf0bb90a783e1d0e074b925e7c46ab2064403b516c384502e698f9b2e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1704312822704754636,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-749210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6babebb750aaa2273bf
3c92e69b421d0,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7d2f606bd4457e00acad3e38e40d2a0eeba1830b665bd0b453061447cb63748,PodSandboxId:2abd877507e1eeb17bac598c6306ff7f3ac69dd4f20a886760fc5fcb935418bd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1704312822572964404,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-749210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1444165caa04e38cec5c0c2f8cc303e,},Annotations:map[string]string{io.ku
bernetes.container.hash: 67b65f4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67f470e7e603d5253da506a4ad4eacce339e32b382bb6ad5e981e6f0c40abb85,PodSandboxId:bde5c7ca363e1f689fd6148fa640fecaef8f66f4cb296a11287144da436c347b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1704312822264978797,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-749210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ead3d115a92e44f831043fbd0ae0d168,},Annotations
:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb19a705262540d0025443f8a9db8a3139cffa87d8fd8f412b12547818a91a8b,PodSandboxId:abfbbc6cf5b80d8dbcd720a3f338646ccd615c6eddb0388aff645f2244d81145,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1704312822104388070,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-749210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74af6328771f18a2cc89e2cdf431801b,},Annotations:map[string
]string{io.kubernetes.container.hash: 8cf259b4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a5586f40-f3c2-4213-ad6c-039d53157072 name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:27:15 no-preload-749210 crio[715]: time="2024-01-03 20:27:15.873960535Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=5ae31e67-7e23-41cd-ab55-bdb71f3612fb name=/runtime.v1.RuntimeService/Version
	Jan 03 20:27:15 no-preload-749210 crio[715]: time="2024-01-03 20:27:15.874020005Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=5ae31e67-7e23-41cd-ab55-bdb71f3612fb name=/runtime.v1.RuntimeService/Version
	Jan 03 20:27:15 no-preload-749210 crio[715]: time="2024-01-03 20:27:15.875382717Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=97f1df8c-8f7a-4aac-be2b-276d50ca18ce name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 20:27:15 no-preload-749210 crio[715]: time="2024-01-03 20:27:15.875698702Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704313635875685743,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=97f1df8c-8f7a-4aac-be2b-276d50ca18ce name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 20:27:15 no-preload-749210 crio[715]: time="2024-01-03 20:27:15.876632745Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=eaa68a3e-f4c3-43ea-b73c-1540b27e2be3 name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:27:15 no-preload-749210 crio[715]: time="2024-01-03 20:27:15.876684437Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=eaa68a3e-f4c3-43ea-b73c-1540b27e2be3 name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:27:15 no-preload-749210 crio[715]: time="2024-01-03 20:27:15.876977368Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:08f95eed823c13190efb38f0b605b92442b8229f1fd1e3b9ab3f2d7fdf18c052,PodSandboxId:5aa887d440d33227e21a77ca0bfecf128beb453149ee7d388729f58ad577fc91,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1704312859463688336,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bf4f1d7-c083-47e7-9976-76bbc72e7bff,},Annotations:map[string]string{io.kubernetes.container.hash: b646abf7,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf18f2f2f6b890569cbe272741251b2382ba323933aa17c91e69ebe474026827,PodSandboxId:0fecf732af3d98284f07096a6c2154e8957b91166978fddea56d5eb53d42eb2e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1704312840492955145,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9a560811-1b16-4bbb-98e8-ceb54e9f8bc8,},Annotations:map[string]string{io.kubernetes.container.hash: 50f62300,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b13d0a23b2b29e43671041dd6b0519d5cdc0ff78c47236c54585056c177f7f4a,PodSandboxId:8a53c0c544eaa90f4252f374271277142681ae680d6289fc7b7fdb1fecb3ee6c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1704312836616376390,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-rbx58,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5e91e6a-e3f9-4dbc-83ff-3069cb67847c,},Annotations:map[string]string{io.kubernetes.container.hash: e0299e54,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:367b9549fe5f7c717d71d33d0fbf5559d0b671b4eec29201566aa8354781474d,PodSandboxId:5aa887d440d33227e21a77ca0bfecf128beb453149ee7d388729f58ad577fc91,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1704312829187059443,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 1bf4f1d7-c083-47e7-9976-76bbc72e7bff,},Annotations:map[string]string{io.kubernetes.container.hash: b646abf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:250be399ab1a0c358e410293a6952aadd55d48a462f2edb9c6d0b560eb323cd8,PodSandboxId:11b51934a004f8813caad8f3a521040e3860a408abcaa2879a6b63f2e74666b6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1704312829142220806,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5hwf4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98fafdf5-9a7
4-4c9f-96eb-20064c72c4e1,},Annotations:map[string]string{io.kubernetes.container.hash: a256ba75,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03433af76d74a120ca843ab0d9d45d2a0808d9048bf656bff68d7b7371082893,PodSandboxId:55ec540cf0bb90a783e1d0e074b925e7c46ab2064403b516c384502e698f9b2e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1704312822704754636,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-749210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6babebb750aaa2273bf
3c92e69b421d0,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7d2f606bd4457e00acad3e38e40d2a0eeba1830b665bd0b453061447cb63748,PodSandboxId:2abd877507e1eeb17bac598c6306ff7f3ac69dd4f20a886760fc5fcb935418bd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1704312822572964404,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-749210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1444165caa04e38cec5c0c2f8cc303e,},Annotations:map[string]string{io.ku
bernetes.container.hash: 67b65f4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67f470e7e603d5253da506a4ad4eacce339e32b382bb6ad5e981e6f0c40abb85,PodSandboxId:bde5c7ca363e1f689fd6148fa640fecaef8f66f4cb296a11287144da436c347b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1704312822264978797,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-749210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ead3d115a92e44f831043fbd0ae0d168,},Annotations
:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb19a705262540d0025443f8a9db8a3139cffa87d8fd8f412b12547818a91a8b,PodSandboxId:abfbbc6cf5b80d8dbcd720a3f338646ccd615c6eddb0388aff645f2244d81145,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1704312822104388070,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-749210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74af6328771f18a2cc89e2cdf431801b,},Annotations:map[string
]string{io.kubernetes.container.hash: 8cf259b4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=eaa68a3e-f4c3-43ea-b73c-1540b27e2be3 name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:27:15 no-preload-749210 crio[715]: time="2024-01-03 20:27:15.915171265Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=efb35c4f-afdb-4eb0-ad3d-b7b5a9ab56ba name=/runtime.v1.RuntimeService/Version
	Jan 03 20:27:15 no-preload-749210 crio[715]: time="2024-01-03 20:27:15.915273223Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=efb35c4f-afdb-4eb0-ad3d-b7b5a9ab56ba name=/runtime.v1.RuntimeService/Version
	Jan 03 20:27:15 no-preload-749210 crio[715]: time="2024-01-03 20:27:15.916730030Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=25fda61d-bbf8-47e2-a003-c0408e374f3a name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 20:27:15 no-preload-749210 crio[715]: time="2024-01-03 20:27:15.917128575Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704313635917112273,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=25fda61d-bbf8-47e2-a003-c0408e374f3a name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 20:27:15 no-preload-749210 crio[715]: time="2024-01-03 20:27:15.917614448Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f4a2cad8-44cf-40e5-b5a4-45f65c3c6bbc name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:27:15 no-preload-749210 crio[715]: time="2024-01-03 20:27:15.917694273Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f4a2cad8-44cf-40e5-b5a4-45f65c3c6bbc name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:27:15 no-preload-749210 crio[715]: time="2024-01-03 20:27:15.917944387Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:08f95eed823c13190efb38f0b605b92442b8229f1fd1e3b9ab3f2d7fdf18c052,PodSandboxId:5aa887d440d33227e21a77ca0bfecf128beb453149ee7d388729f58ad577fc91,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1704312859463688336,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bf4f1d7-c083-47e7-9976-76bbc72e7bff,},Annotations:map[string]string{io.kubernetes.container.hash: b646abf7,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf18f2f2f6b890569cbe272741251b2382ba323933aa17c91e69ebe474026827,PodSandboxId:0fecf732af3d98284f07096a6c2154e8957b91166978fddea56d5eb53d42eb2e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1704312840492955145,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9a560811-1b16-4bbb-98e8-ceb54e9f8bc8,},Annotations:map[string]string{io.kubernetes.container.hash: 50f62300,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b13d0a23b2b29e43671041dd6b0519d5cdc0ff78c47236c54585056c177f7f4a,PodSandboxId:8a53c0c544eaa90f4252f374271277142681ae680d6289fc7b7fdb1fecb3ee6c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1704312836616376390,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-rbx58,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5e91e6a-e3f9-4dbc-83ff-3069cb67847c,},Annotations:map[string]string{io.kubernetes.container.hash: e0299e54,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:367b9549fe5f7c717d71d33d0fbf5559d0b671b4eec29201566aa8354781474d,PodSandboxId:5aa887d440d33227e21a77ca0bfecf128beb453149ee7d388729f58ad577fc91,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1704312829187059443,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 1bf4f1d7-c083-47e7-9976-76bbc72e7bff,},Annotations:map[string]string{io.kubernetes.container.hash: b646abf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:250be399ab1a0c358e410293a6952aadd55d48a462f2edb9c6d0b560eb323cd8,PodSandboxId:11b51934a004f8813caad8f3a521040e3860a408abcaa2879a6b63f2e74666b6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1704312829142220806,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5hwf4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98fafdf5-9a7
4-4c9f-96eb-20064c72c4e1,},Annotations:map[string]string{io.kubernetes.container.hash: a256ba75,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03433af76d74a120ca843ab0d9d45d2a0808d9048bf656bff68d7b7371082893,PodSandboxId:55ec540cf0bb90a783e1d0e074b925e7c46ab2064403b516c384502e698f9b2e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1704312822704754636,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-749210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6babebb750aaa2273bf
3c92e69b421d0,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7d2f606bd4457e00acad3e38e40d2a0eeba1830b665bd0b453061447cb63748,PodSandboxId:2abd877507e1eeb17bac598c6306ff7f3ac69dd4f20a886760fc5fcb935418bd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1704312822572964404,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-749210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1444165caa04e38cec5c0c2f8cc303e,},Annotations:map[string]string{io.ku
bernetes.container.hash: 67b65f4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67f470e7e603d5253da506a4ad4eacce339e32b382bb6ad5e981e6f0c40abb85,PodSandboxId:bde5c7ca363e1f689fd6148fa640fecaef8f66f4cb296a11287144da436c347b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1704312822264978797,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-749210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ead3d115a92e44f831043fbd0ae0d168,},Annotations
:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb19a705262540d0025443f8a9db8a3139cffa87d8fd8f412b12547818a91a8b,PodSandboxId:abfbbc6cf5b80d8dbcd720a3f338646ccd615c6eddb0388aff645f2244d81145,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1704312822104388070,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-749210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74af6328771f18a2cc89e2cdf431801b,},Annotations:map[string
]string{io.kubernetes.container.hash: 8cf259b4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f4a2cad8-44cf-40e5-b5a4-45f65c3c6bbc name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:27:15 no-preload-749210 crio[715]: time="2024-01-03 20:27:15.958737007Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=dd6c92d7-f506-4d54-8d69-409dbd62227b name=/runtime.v1.RuntimeService/Version
	Jan 03 20:27:15 no-preload-749210 crio[715]: time="2024-01-03 20:27:15.958934775Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=dd6c92d7-f506-4d54-8d69-409dbd62227b name=/runtime.v1.RuntimeService/Version
	Jan 03 20:27:15 no-preload-749210 crio[715]: time="2024-01-03 20:27:15.960301938Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=a11a73fb-c8bf-4837-8ecb-961583603384 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 20:27:15 no-preload-749210 crio[715]: time="2024-01-03 20:27:15.960768954Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704313635960752062,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=a11a73fb-c8bf-4837-8ecb-961583603384 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 20:27:15 no-preload-749210 crio[715]: time="2024-01-03 20:27:15.961488710Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=3fc1d081-6b27-4cb9-b093-9f3e8b77acde name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:27:15 no-preload-749210 crio[715]: time="2024-01-03 20:27:15.961573564Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=3fc1d081-6b27-4cb9-b093-9f3e8b77acde name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:27:15 no-preload-749210 crio[715]: time="2024-01-03 20:27:15.961869969Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:08f95eed823c13190efb38f0b605b92442b8229f1fd1e3b9ab3f2d7fdf18c052,PodSandboxId:5aa887d440d33227e21a77ca0bfecf128beb453149ee7d388729f58ad577fc91,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1704312859463688336,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bf4f1d7-c083-47e7-9976-76bbc72e7bff,},Annotations:map[string]string{io.kubernetes.container.hash: b646abf7,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf18f2f2f6b890569cbe272741251b2382ba323933aa17c91e69ebe474026827,PodSandboxId:0fecf732af3d98284f07096a6c2154e8957b91166978fddea56d5eb53d42eb2e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1704312840492955145,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9a560811-1b16-4bbb-98e8-ceb54e9f8bc8,},Annotations:map[string]string{io.kubernetes.container.hash: 50f62300,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b13d0a23b2b29e43671041dd6b0519d5cdc0ff78c47236c54585056c177f7f4a,PodSandboxId:8a53c0c544eaa90f4252f374271277142681ae680d6289fc7b7fdb1fecb3ee6c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1704312836616376390,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-rbx58,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5e91e6a-e3f9-4dbc-83ff-3069cb67847c,},Annotations:map[string]string{io.kubernetes.container.hash: e0299e54,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:367b9549fe5f7c717d71d33d0fbf5559d0b671b4eec29201566aa8354781474d,PodSandboxId:5aa887d440d33227e21a77ca0bfecf128beb453149ee7d388729f58ad577fc91,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1704312829187059443,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 1bf4f1d7-c083-47e7-9976-76bbc72e7bff,},Annotations:map[string]string{io.kubernetes.container.hash: b646abf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:250be399ab1a0c358e410293a6952aadd55d48a462f2edb9c6d0b560eb323cd8,PodSandboxId:11b51934a004f8813caad8f3a521040e3860a408abcaa2879a6b63f2e74666b6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1704312829142220806,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5hwf4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98fafdf5-9a7
4-4c9f-96eb-20064c72c4e1,},Annotations:map[string]string{io.kubernetes.container.hash: a256ba75,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03433af76d74a120ca843ab0d9d45d2a0808d9048bf656bff68d7b7371082893,PodSandboxId:55ec540cf0bb90a783e1d0e074b925e7c46ab2064403b516c384502e698f9b2e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1704312822704754636,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-749210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6babebb750aaa2273bf
3c92e69b421d0,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7d2f606bd4457e00acad3e38e40d2a0eeba1830b665bd0b453061447cb63748,PodSandboxId:2abd877507e1eeb17bac598c6306ff7f3ac69dd4f20a886760fc5fcb935418bd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1704312822572964404,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-749210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1444165caa04e38cec5c0c2f8cc303e,},Annotations:map[string]string{io.ku
bernetes.container.hash: 67b65f4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67f470e7e603d5253da506a4ad4eacce339e32b382bb6ad5e981e6f0c40abb85,PodSandboxId:bde5c7ca363e1f689fd6148fa640fecaef8f66f4cb296a11287144da436c347b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1704312822264978797,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-749210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ead3d115a92e44f831043fbd0ae0d168,},Annotations
:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb19a705262540d0025443f8a9db8a3139cffa87d8fd8f412b12547818a91a8b,PodSandboxId:abfbbc6cf5b80d8dbcd720a3f338646ccd615c6eddb0388aff645f2244d81145,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1704312822104388070,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-749210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74af6328771f18a2cc89e2cdf431801b,},Annotations:map[string
]string{io.kubernetes.container.hash: 8cf259b4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=3fc1d081-6b27-4cb9-b093-9f3e8b77acde name=/runtime.v1.RuntimeService/ListContainers
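	
	The Version / ImageFsInfo / ListContainers pairs above are the standard CRI RuntimeService and ImageService RPCs that a CRI client (the kubelet, or the report's own log gathering) issues against CRI-O; each poll simply returns the same container list. As a hedged sketch using the profile name and the CRI-O socket path seen in this run, the same listing can be reproduced by hand from inside the node with crictl:
	
	  $ minikube -p no-preload-749210 ssh
	  $ sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a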
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	08f95eed823c1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       3                   5aa887d440d33       storage-provisioner
	bf18f2f2f6b89       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   0fecf732af3d9       busybox
	b13d0a23b2b29       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   8a53c0c544eaa       coredns-76f75df574-rbx58
	367b9549fe5f7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       2                   5aa887d440d33       storage-provisioner
	250be399ab1a0       cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834                                      13 minutes ago      Running             kube-proxy                1                   11b51934a004f       kube-proxy-5hwf4
	03433af76d74a       4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210                                      13 minutes ago      Running             kube-scheduler            1                   55ec540cf0bb9       kube-scheduler-no-preload-749210
	f7d2f606bd445       a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7                                      13 minutes ago      Running             etcd                      1                   2abd877507e1e       etcd-no-preload-749210
	67f470e7e603d       d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d                                      13 minutes ago      Running             kube-controller-manager   1                   bde5c7ca363e1       kube-controller-manager-no-preload-749210
	fb19a70526254       bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f                                      13 minutes ago      Running             kube-apiserver            1                   abfbbc6cf5b80       kube-apiserver-no-preload-749210
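	
	The listing shows storage-provisioner attempt 2 as Exited while attempt 3 is Running, matching the restartCount: 3 annotation in the CRI dump above. If that restart loop needed investigating, the exited container's output could be read straight from the runtime (container ID taken from the table; a sketch, exact flags may vary by crictl version):
	
	  $ sudo crictl logs 367b9549fe5f7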
	
	
	==> coredns [b13d0a23b2b29e43671041dd6b0519d5cdc0ff78c47236c54585056c177f7f4a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:55139 - 62607 "HINFO IN 9055025431400979744.4890078852502409788. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011418444s
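	
	The single HINFO query above appears to be CoreDNS's own startup loop-detection probe (note it originates from 127.0.0.1), so the NXDOMAIN answer is expected rather than an error. A quick in-cluster DNS sanity check, reusing the busybox image already pulled in this run, would look roughly like:
	
	  $ kubectl --context no-preload-749210 run dns-check --rm -it --restart=Never \
	      --image=gcr.io/k8s-minikube/busybox -- nslookup kubernetes.default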
	
	
	==> describe nodes <==
	Name:               no-preload-749210
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-749210
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b6a81cbc05f28310ff11df4170e79e2b8bf477a
	                    minikube.k8s.io/name=no-preload-749210
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_03T20_05_26_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 03 Jan 2024 20:05:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-749210
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 03 Jan 2024 20:27:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 03 Jan 2024 20:24:29 +0000   Wed, 03 Jan 2024 20:05:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 03 Jan 2024 20:24:29 +0000   Wed, 03 Jan 2024 20:05:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 03 Jan 2024 20:24:29 +0000   Wed, 03 Jan 2024 20:05:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 03 Jan 2024 20:24:29 +0000   Wed, 03 Jan 2024 20:13:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.245
	  Hostname:    no-preload-749210
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 7606b028033543858e648631d2e3789f
	  System UUID:                7606b028-0335-4385-8e64-8631d2e3789f
	  Boot ID:                    e9109145-cffd-42f2-9675-c9d2c4d88f7b
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.29.0-rc.2
	  Kube-Proxy Version:         v1.29.0-rc.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 coredns-76f75df574-rbx58                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-no-preload-749210                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-no-preload-749210             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-no-preload-749210    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-5hwf4                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-no-preload-749210             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-57f55c9bc5-tqn5m              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         20m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node no-preload-749210 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m                kubelet          Node no-preload-749210 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m                kubelet          Node no-preload-749210 status is now: NodeHasSufficientPID
	  Normal  NodeReady                21m                kubelet          Node no-preload-749210 status is now: NodeReady
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           21m                node-controller  Node no-preload-749210 event: Registered Node no-preload-749210 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node no-preload-749210 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node no-preload-749210 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node no-preload-749210 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node no-preload-749210 event: Registered Node no-preload-749210 in Controller
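	
	This section is the output of kubectl describe node for the single control-plane node; the two sets of Starting/NodeHasSufficient* events (21m and 13m old) are consistent with the VM being rebooted around 20:12-20:13, which matches the dmesg boot timestamps below and the Ready transition at 20:13:57. To re-run the same query against this profile, something like the following should suffice:
	
	  $ kubectl --context no-preload-749210 describe node no-preload-749210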
	
	
	==> dmesg <==
	[Jan 3 20:12] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.062282] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.393777] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Jan 3 20:13] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.134005] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.455542] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.376044] systemd-fstab-generator[642]: Ignoring "noauto" for root device
	[  +0.123299] systemd-fstab-generator[653]: Ignoring "noauto" for root device
	[  +0.157223] systemd-fstab-generator[666]: Ignoring "noauto" for root device
	[  +0.126516] systemd-fstab-generator[677]: Ignoring "noauto" for root device
	[  +0.246965] systemd-fstab-generator[701]: Ignoring "noauto" for root device
	[ +29.915829] systemd-fstab-generator[1328]: Ignoring "noauto" for root device
	[ +15.023103] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [f7d2f606bd4457e00acad3e38e40d2a0eeba1830b665bd0b453061447cb63748] <==
	{"level":"info","ts":"2024-01-03T20:13:58.171738Z","caller":"traceutil/trace.go:171","msg":"trace[353945637] transaction","detail":"{read_only:false; response_revision:566; number_of_response:1; }","duration":"479.54041ms","start":"2024-01-03T20:13:57.692128Z","end":"2024-01-03T20:13:58.171669Z","steps":["trace[353945637] 'process raft request'  (duration: 479.094111ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-03T20:13:58.171908Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-03T20:13:57.692107Z","time spent":"479.752061ms","remote":"127.0.0.1:46828","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4427,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/minions/no-preload-749210\" mod_revision:480 > success:<request_put:<key:\"/registry/minions/no-preload-749210\" value_size:4384 >> failure:<request_range:<key:\"/registry/minions/no-preload-749210\" > >"}
	{"level":"warn","ts":"2024-01-03T20:13:59.15818Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"464.103076ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-no-preload-749210\" ","response":"range_response_count:1 size:5609"}
	{"level":"info","ts":"2024-01-03T20:13:59.158357Z","caller":"traceutil/trace.go:171","msg":"trace[827098758] range","detail":"{range_begin:/registry/pods/kube-system/etcd-no-preload-749210; range_end:; response_count:1; response_revision:566; }","duration":"464.289227ms","start":"2024-01-03T20:13:58.694053Z","end":"2024-01-03T20:13:59.158342Z","steps":["trace[827098758] 'range keys from in-memory index tree'  (duration: 463.99247ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-03T20:13:59.158467Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-03T20:13:58.694036Z","time spent":"464.42131ms","remote":"127.0.0.1:46830","response type":"/etcdserverpb.KV/Range","request count":0,"request size":51,"response count":1,"response size":5633,"request content":"key:\"/registry/pods/kube-system/etcd-no-preload-749210\" "}
	{"level":"warn","ts":"2024-01-03T20:13:59.15824Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"517.883325ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.61.245\" ","response":"range_response_count:1 size:135"}
	{"level":"info","ts":"2024-01-03T20:13:59.158578Z","caller":"traceutil/trace.go:171","msg":"trace[500221479] range","detail":"{range_begin:/registry/masterleases/192.168.61.245; range_end:; response_count:1; response_revision:566; }","duration":"518.220578ms","start":"2024-01-03T20:13:58.640339Z","end":"2024-01-03T20:13:59.15856Z","steps":["trace[500221479] 'range keys from in-memory index tree'  (duration: 517.706306ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-03T20:13:59.158616Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-03T20:13:58.640326Z","time spent":"518.28023ms","remote":"127.0.0.1:46796","response type":"/etcdserverpb.KV/Range","request count":0,"request size":39,"response count":1,"response size":159,"request content":"key:\"/registry/masterleases/192.168.61.245\" "}
	{"level":"info","ts":"2024-01-03T20:13:59.323846Z","caller":"traceutil/trace.go:171","msg":"trace[1017916715] linearizableReadLoop","detail":"{readStateIndex:608; appliedIndex:607; }","duration":"161.617836ms","start":"2024-01-03T20:13:59.162116Z","end":"2024-01-03T20:13:59.323733Z","steps":["trace[1017916715] 'read index received'  (duration: 161.504254ms)","trace[1017916715] 'applied index is now lower than readState.Index'  (duration: 112.845µs)"],"step_count":2}
	{"level":"warn","ts":"2024-01-03T20:13:59.324009Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"161.896598ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/no-preload-749210\" ","response":"range_response_count:1 size:4441"}
	{"level":"info","ts":"2024-01-03T20:13:59.324053Z","caller":"traceutil/trace.go:171","msg":"trace[1176094791] range","detail":"{range_begin:/registry/minions/no-preload-749210; range_end:; response_count:1; response_revision:566; }","duration":"161.951792ms","start":"2024-01-03T20:13:59.162094Z","end":"2024-01-03T20:13:59.324046Z","steps":["trace[1176094791] 'agreement among raft nodes before linearized reading'  (duration: 161.865333ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-03T20:13:59.713561Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"256.483423ms","expected-duration":"100ms","prefix":"","request":"header:<ID:3441749369347487512 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.61.245\" mod_revision:489 > success:<request_put:<key:\"/registry/masterleases/192.168.61.245\" value_size:67 lease:3441749369347487509 >> failure:<request_range:<key:\"/registry/masterleases/192.168.61.245\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-01-03T20:13:59.714181Z","caller":"traceutil/trace.go:171","msg":"trace[1947085449] transaction","detail":"{read_only:false; response_revision:567; number_of_response:1; }","duration":"386.877101ms","start":"2024-01-03T20:13:59.327195Z","end":"2024-01-03T20:13:59.714072Z","steps":["trace[1947085449] 'process raft request'  (duration: 129.436175ms)","trace[1947085449] 'compare'  (duration: 254.931617ms)"],"step_count":2}
	{"level":"warn","ts":"2024-01-03T20:13:59.714523Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-03T20:13:59.327179Z","time spent":"387.146444ms","remote":"127.0.0.1:46796","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":120,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/masterleases/192.168.61.245\" mod_revision:489 > success:<request_put:<key:\"/registry/masterleases/192.168.61.245\" value_size:67 lease:3441749369347487509 >> failure:<request_range:<key:\"/registry/masterleases/192.168.61.245\" > >"}
	{"level":"info","ts":"2024-01-03T20:13:59.720767Z","caller":"traceutil/trace.go:171","msg":"trace[1521015019] linearizableReadLoop","detail":"{readStateIndex:609; appliedIndex:608; }","duration":"385.511712ms","start":"2024-01-03T20:13:59.328434Z","end":"2024-01-03T20:13:59.713946Z","steps":["trace[1521015019] 'read index received'  (duration: 128.136205ms)","trace[1521015019] 'applied index is now lower than readState.Index'  (duration: 257.373615ms)"],"step_count":2}
	{"level":"warn","ts":"2024-01-03T20:13:59.720369Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"391.942232ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-no-preload-749210\" ","response":"range_response_count:1 size:5609"}
	{"level":"info","ts":"2024-01-03T20:13:59.721422Z","caller":"traceutil/trace.go:171","msg":"trace[1348731087] range","detail":"{range_begin:/registry/pods/kube-system/etcd-no-preload-749210; range_end:; response_count:1; response_revision:567; }","duration":"393.002433ms","start":"2024-01-03T20:13:59.328405Z","end":"2024-01-03T20:13:59.721407Z","steps":["trace[1348731087] 'agreement among raft nodes before linearized reading'  (duration: 391.809133ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-03T20:13:59.721466Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-03T20:13:59.328394Z","time spent":"393.056542ms","remote":"127.0.0.1:46830","response type":"/etcdserverpb.KV/Range","request count":0,"request size":51,"response count":1,"response size":5633,"request content":"key:\"/registry/pods/kube-system/etcd-no-preload-749210\" "}
	{"level":"warn","ts":"2024-01-03T20:13:59.721708Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"131.687489ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/node-controller\" ","response":"range_response_count:1 size:195"}
	{"level":"info","ts":"2024-01-03T20:13:59.721775Z","caller":"traceutil/trace.go:171","msg":"trace[440588789] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/node-controller; range_end:; response_count:1; response_revision:567; }","duration":"131.756775ms","start":"2024-01-03T20:13:59.590009Z","end":"2024-01-03T20:13:59.721766Z","steps":["trace[440588789] 'agreement among raft nodes before linearized reading'  (duration: 131.637673ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-03T20:14:00.099038Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"284.824673ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/generic-garbage-collector\" ","response":"range_response_count:1 size:216"}
	{"level":"info","ts":"2024-01-03T20:14:00.099212Z","caller":"traceutil/trace.go:171","msg":"trace[526207413] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/generic-garbage-collector; range_end:; response_count:1; response_revision:567; }","duration":"285.256686ms","start":"2024-01-03T20:13:59.813936Z","end":"2024-01-03T20:14:00.099192Z","steps":["trace[526207413] 'range keys from in-memory index tree'  (duration: 284.720007ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-03T20:23:45.980302Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":829}
	{"level":"info","ts":"2024-01-03T20:23:45.983881Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":829,"took":"3.212788ms","hash":1439995688}
	{"level":"info","ts":"2024-01-03T20:23:45.983959Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1439995688,"revision":829,"compact-revision":-1}
	
	
	==> kernel <==
	 20:27:16 up 14 min,  0 users,  load average: 0.48, 0.28, 0.21
	Linux no-preload-749210 5.10.57 #1 SMP Sat Dec 16 11:03:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [fb19a705262540d0025443f8a9db8a3139cffa87d8fd8f412b12547818a91a8b] <==
	I0103 20:21:48.423591       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0103 20:23:47.425450       1 handler_proxy.go:93] no RequestInfo found in the context
	E0103 20:23:47.425633       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0103 20:23:48.426513       1 handler_proxy.go:93] no RequestInfo found in the context
	E0103 20:23:48.426634       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0103 20:23:48.426679       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0103 20:23:48.427688       1 handler_proxy.go:93] no RequestInfo found in the context
	E0103 20:23:48.427877       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0103 20:23:48.427923       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0103 20:24:48.427695       1 handler_proxy.go:93] no RequestInfo found in the context
	E0103 20:24:48.428051       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0103 20:24:48.428151       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0103 20:24:48.428119       1 handler_proxy.go:93] no RequestInfo found in the context
	E0103 20:24:48.428390       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0103 20:24:48.430015       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0103 20:26:48.429280       1 handler_proxy.go:93] no RequestInfo found in the context
	E0103 20:26:48.429455       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0103 20:26:48.429478       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0103 20:26:48.430382       1 handler_proxy.go:93] no RequestInfo found in the context
	E0103 20:26:48.430557       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0103 20:26:48.430592       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [67f470e7e603d5253da506a4ad4eacce339e32b382bb6ad5e981e6f0c40abb85] <==
	I0103 20:21:30.913766       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0103 20:22:00.586249       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0103 20:22:00.926900       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0103 20:22:30.591720       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0103 20:22:30.936235       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0103 20:23:00.596991       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0103 20:23:00.953556       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0103 20:23:30.602993       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0103 20:23:30.962313       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0103 20:24:00.609739       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0103 20:24:00.971737       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0103 20:24:30.615273       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0103 20:24:30.981414       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0103 20:25:00.625992       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0103 20:25:00.990685       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0103 20:25:06.217167       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="263.793µs"
	I0103 20:25:20.210625       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="198.202µs"
	E0103 20:25:30.631690       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0103 20:25:31.000567       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0103 20:26:00.637681       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0103 20:26:01.012645       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0103 20:26:30.642930       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0103 20:26:31.021470       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0103 20:27:00.649662       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0103 20:27:01.029705       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [250be399ab1a0c358e410293a6952aadd55d48a462f2edb9c6d0b560eb323cd8] <==
	I0103 20:13:49.412638       1 server_others.go:72] "Using iptables proxy"
	I0103 20:13:49.454641       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.61.245"]
	I0103 20:13:49.582896       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0103 20:13:49.582961       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0103 20:13:49.582990       1 server_others.go:168] "Using iptables Proxier"
	I0103 20:13:49.586341       1 proxier.go:246] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0103 20:13:49.586629       1 server.go:865] "Version info" version="v1.29.0-rc.2"
	I0103 20:13:49.586893       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0103 20:13:49.588892       1 config.go:188] "Starting service config controller"
	I0103 20:13:49.591750       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0103 20:13:49.589310       1 config.go:97] "Starting endpoint slice config controller"
	I0103 20:13:49.591916       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0103 20:13:49.592056       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0103 20:13:49.590373       1 config.go:315] "Starting node config controller"
	I0103 20:13:49.592166       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0103 20:13:49.692952       1 shared_informer.go:318] Caches are synced for node config
	I0103 20:13:49.693077       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [03433af76d74a120ca843ab0d9d45d2a0808d9048bf656bff68d7b7371082893] <==
	I0103 20:13:44.700343       1 serving.go:380] Generated self-signed cert in-memory
	W0103 20:13:47.298040       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0103 20:13:47.298095       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0103 20:13:47.298109       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0103 20:13:47.298117       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0103 20:13:47.429589       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.0-rc.2"
	I0103 20:13:47.429929       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0103 20:13:47.445288       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0103 20:13:47.451123       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0103 20:13:47.451198       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0103 20:13:47.451462       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0103 20:13:47.552507       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Wed 2024-01-03 20:13:01 UTC, ends at Wed 2024-01-03 20:27:16 UTC. --
	Jan 03 20:24:41 no-preload-749210 kubelet[1334]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 03 20:24:41 no-preload-749210 kubelet[1334]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 03 20:24:41 no-preload-749210 kubelet[1334]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 03 20:24:53 no-preload-749210 kubelet[1334]: E0103 20:24:53.203231    1334 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 03 20:24:53 no-preload-749210 kubelet[1334]: E0103 20:24:53.203295    1334 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 03 20:24:53 no-preload-749210 kubelet[1334]: E0103 20:24:53.203581    1334 kuberuntime_manager.go:1262] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-x9z74,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Pro
beHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fi
le,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-tqn5m_kube-system(8cc1dc91-fafb-4405-8820-a7f99ccbbb0c): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 03 20:24:53 no-preload-749210 kubelet[1334]: E0103 20:24:53.203633    1334 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-tqn5m" podUID="8cc1dc91-fafb-4405-8820-a7f99ccbbb0c"
	Jan 03 20:25:06 no-preload-749210 kubelet[1334]: E0103 20:25:06.191323    1334 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tqn5m" podUID="8cc1dc91-fafb-4405-8820-a7f99ccbbb0c"
	Jan 03 20:25:20 no-preload-749210 kubelet[1334]: E0103 20:25:20.191990    1334 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tqn5m" podUID="8cc1dc91-fafb-4405-8820-a7f99ccbbb0c"
	Jan 03 20:25:31 no-preload-749210 kubelet[1334]: E0103 20:25:31.192759    1334 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tqn5m" podUID="8cc1dc91-fafb-4405-8820-a7f99ccbbb0c"
	Jan 03 20:25:41 no-preload-749210 kubelet[1334]: E0103 20:25:41.214870    1334 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 03 20:25:41 no-preload-749210 kubelet[1334]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 03 20:25:41 no-preload-749210 kubelet[1334]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 03 20:25:41 no-preload-749210 kubelet[1334]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 03 20:25:43 no-preload-749210 kubelet[1334]: E0103 20:25:43.191983    1334 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tqn5m" podUID="8cc1dc91-fafb-4405-8820-a7f99ccbbb0c"
	Jan 03 20:25:54 no-preload-749210 kubelet[1334]: E0103 20:25:54.192482    1334 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tqn5m" podUID="8cc1dc91-fafb-4405-8820-a7f99ccbbb0c"
	Jan 03 20:26:09 no-preload-749210 kubelet[1334]: E0103 20:26:09.191888    1334 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tqn5m" podUID="8cc1dc91-fafb-4405-8820-a7f99ccbbb0c"
	Jan 03 20:26:22 no-preload-749210 kubelet[1334]: E0103 20:26:22.192219    1334 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tqn5m" podUID="8cc1dc91-fafb-4405-8820-a7f99ccbbb0c"
	Jan 03 20:26:34 no-preload-749210 kubelet[1334]: E0103 20:26:34.191732    1334 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tqn5m" podUID="8cc1dc91-fafb-4405-8820-a7f99ccbbb0c"
	Jan 03 20:26:41 no-preload-749210 kubelet[1334]: E0103 20:26:41.213268    1334 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 03 20:26:41 no-preload-749210 kubelet[1334]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 03 20:26:41 no-preload-749210 kubelet[1334]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 03 20:26:41 no-preload-749210 kubelet[1334]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 03 20:26:49 no-preload-749210 kubelet[1334]: E0103 20:26:49.194632    1334 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tqn5m" podUID="8cc1dc91-fafb-4405-8820-a7f99ccbbb0c"
	Jan 03 20:27:04 no-preload-749210 kubelet[1334]: E0103 20:27:04.191661    1334 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tqn5m" podUID="8cc1dc91-fafb-4405-8820-a7f99ccbbb0c"
	
	
	==> storage-provisioner [08f95eed823c13190efb38f0b605b92442b8229f1fd1e3b9ab3f2d7fdf18c052] <==
	I0103 20:14:19.587994       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0103 20:14:19.600041       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0103 20:14:19.600093       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0103 20:14:37.004521       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0103 20:14:37.007084       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-749210_3a85c888-e7a3-4f6e-8df3-3e4fbcedf466!
	I0103 20:14:37.008071       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"80492e63-5321-45f4-a1ba-064f0ee67d00", APIVersion:"v1", ResourceVersion:"611", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-749210_3a85c888-e7a3-4f6e-8df3-3e4fbcedf466 became leader
	I0103 20:14:37.108986       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-749210_3a85c888-e7a3-4f6e-8df3-3e4fbcedf466!
	
	
	==> storage-provisioner [367b9549fe5f7c717d71d33d0fbf5559d0b671b4eec29201566aa8354781474d] <==
	I0103 20:13:49.358296       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0103 20:14:19.361429       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-749210 -n no-preload-749210
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-749210 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-tqn5m
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-749210 describe pod metrics-server-57f55c9bc5-tqn5m
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-749210 describe pod metrics-server-57f55c9bc5-tqn5m: exit status 1 (64.669657ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-tqn5m" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-749210 describe pod metrics-server-57f55c9bc5-tqn5m: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (543.12s)
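Note: the only non-running pod the post-mortem finds above is the metrics-server pod, and the kubelet log shows why: the test configures the metrics-server image to be pulled from the fake.domain registry (see the "addons enable metrics-server ... --registries=MetricsServer=fake.domain" entries in the Audit table below), and fake.domain does not resolve, so the ImagePullBackOff follows directly from that setting. The wait that times out is for a pod matching k8s-app=kubernetes-dashboard (the same wait is shown for the default-k8s-diff-port profile below), which never appears. To replay the harness's post-mortem queries by hand against the same profile, something like the following should work (a sketch; the context name, selectors and commands are taken from this report):

	# list non-Running pods, the same query the post-mortem helper runs
	kubectl --context no-preload-749210 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
	# check whether the dashboard pod the test waits for ever showed up
	kubectl --context no-preload-749210 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	# collect the same minikube post-mortem logs
	out/minikube-linux-amd64 -p no-preload-749210 logs -n 25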

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (543.12s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0103 20:18:32.532086   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/flannel-719541/client.crt: no such file or directory
E0103 20:19:07.103155   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/ingress-addon-legacy-736101/client.crt: no such file or directory
E0103 20:19:09.451826   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/bridge-719541/client.crt: no such file or directory
E0103 20:19:21.013512   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/auto-719541/client.crt: no such file or directory
E0103 20:19:48.942101   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/kindnet-719541/client.crt: no such file or directory
E0103 20:20:44.056451   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/auto-719541/client.crt: no such file or directory
E0103 20:20:48.653679   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/functional-166268/client.crt: no such file or directory
E0103 20:20:55.307993   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/addons-848866/client.crt: no such file or directory
E0103 20:21:11.987248   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/kindnet-719541/client.crt: no such file or directory
E0103 20:21:30.039393   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/calico-719541/client.crt: no such file or directory
E0103 20:21:42.554399   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/custom-flannel-719541/client.crt: no such file or directory
E0103 20:22:11.706175   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/functional-166268/client.crt: no such file or directory
E0103 20:22:27.747628   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/enable-default-cni-719541/client.crt: no such file or directory
E0103 20:22:53.087203   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/calico-719541/client.crt: no such file or directory
E0103 20:23:05.601789   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/custom-flannel-719541/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-018788 -n default-k8s-diff-port-018788
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-01-03 20:27:23.43389313 +0000 UTC m=+5408.906470124
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-018788 -n default-k8s-diff-port-018788
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-018788 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-018788 logs -n 25: (1.602375898s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-719541 sudo cat                              | bridge-719541                | jenkins | v1.32.0 | 03 Jan 24 20:04 UTC | 03 Jan 24 20:04 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-719541 sudo                                  | bridge-719541                | jenkins | v1.32.0 | 03 Jan 24 20:04 UTC | 03 Jan 24 20:04 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-719541 sudo                                  | bridge-719541                | jenkins | v1.32.0 | 03 Jan 24 20:04 UTC | 03 Jan 24 20:04 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-719541 sudo                                  | bridge-719541                | jenkins | v1.32.0 | 03 Jan 24 20:04 UTC | 03 Jan 24 20:04 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-719541 sudo find                             | bridge-719541                | jenkins | v1.32.0 | 03 Jan 24 20:04 UTC | 03 Jan 24 20:04 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-719541 sudo crio                             | bridge-719541                | jenkins | v1.32.0 | 03 Jan 24 20:04 UTC | 03 Jan 24 20:04 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-719541                                       | bridge-719541                | jenkins | v1.32.0 | 03 Jan 24 20:04 UTC | 03 Jan 24 20:04 UTC |
	| delete  | -p                                                     | disable-driver-mounts-350596 | jenkins | v1.32.0 | 03 Jan 24 20:04 UTC | 03 Jan 24 20:04 UTC |
	|         | disable-driver-mounts-350596                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-018788 | jenkins | v1.32.0 | 03 Jan 24 20:04 UTC | 03 Jan 24 20:06 UTC |
	|         | default-k8s-diff-port-018788                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-927922        | old-k8s-version-927922       | jenkins | v1.32.0 | 03 Jan 24 20:05 UTC | 03 Jan 24 20:05 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-927922                              | old-k8s-version-927922       | jenkins | v1.32.0 | 03 Jan 24 20:05 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-451331            | embed-certs-451331           | jenkins | v1.32.0 | 03 Jan 24 20:05 UTC | 03 Jan 24 20:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-451331                                  | embed-certs-451331           | jenkins | v1.32.0 | 03 Jan 24 20:06 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-749210             | no-preload-749210            | jenkins | v1.32.0 | 03 Jan 24 20:06 UTC | 03 Jan 24 20:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-749210                                   | no-preload-749210            | jenkins | v1.32.0 | 03 Jan 24 20:06 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-018788  | default-k8s-diff-port-018788 | jenkins | v1.32.0 | 03 Jan 24 20:06 UTC | 03 Jan 24 20:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-018788 | jenkins | v1.32.0 | 03 Jan 24 20:06 UTC |                     |
	|         | default-k8s-diff-port-018788                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-927922             | old-k8s-version-927922       | jenkins | v1.32.0 | 03 Jan 24 20:07 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-927922                              | old-k8s-version-927922       | jenkins | v1.32.0 | 03 Jan 24 20:07 UTC | 03 Jan 24 20:14 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-451331                 | embed-certs-451331           | jenkins | v1.32.0 | 03 Jan 24 20:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-451331                                  | embed-certs-451331           | jenkins | v1.32.0 | 03 Jan 24 20:08 UTC | 03 Jan 24 20:17 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-749210                  | no-preload-749210            | jenkins | v1.32.0 | 03 Jan 24 20:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-018788       | default-k8s-diff-port-018788 | jenkins | v1.32.0 | 03 Jan 24 20:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-749210                                   | no-preload-749210            | jenkins | v1.32.0 | 03 Jan 24 20:09 UTC | 03 Jan 24 20:18 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-018788 | jenkins | v1.32.0 | 03 Jan 24 20:09 UTC | 03 Jan 24 20:18 UTC |
	|         | default-k8s-diff-port-018788                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/03 20:09:05
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0103 20:09:05.502375   62050 out.go:296] Setting OutFile to fd 1 ...
	I0103 20:09:05.502548   62050 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 20:09:05.502558   62050 out.go:309] Setting ErrFile to fd 2...
	I0103 20:09:05.502566   62050 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 20:09:05.502759   62050 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17885-9609/.minikube/bin
	I0103 20:09:05.503330   62050 out.go:303] Setting JSON to false
	I0103 20:09:05.504222   62050 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6693,"bootTime":1704305853,"procs":223,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0103 20:09:05.504283   62050 start.go:138] virtualization: kvm guest
	I0103 20:09:05.507002   62050 out.go:177] * [default-k8s-diff-port-018788] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0103 20:09:05.508642   62050 out.go:177]   - MINIKUBE_LOCATION=17885
	I0103 20:09:05.508667   62050 notify.go:220] Checking for updates...
	I0103 20:09:05.510296   62050 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0103 20:09:05.511927   62050 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17885-9609/kubeconfig
	I0103 20:09:05.513487   62050 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17885-9609/.minikube
	I0103 20:09:05.515064   62050 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0103 20:09:05.516515   62050 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0103 20:09:05.518301   62050 config.go:182] Loaded profile config "default-k8s-diff-port-018788": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 20:09:05.518774   62050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:09:05.518827   62050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:09:05.533730   62050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37361
	I0103 20:09:05.534098   62050 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:09:05.534667   62050 main.go:141] libmachine: Using API Version  1
	I0103 20:09:05.534699   62050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:09:05.535027   62050 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:09:05.535298   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .DriverName
	I0103 20:09:05.535543   62050 driver.go:392] Setting default libvirt URI to qemu:///system
	I0103 20:09:05.535823   62050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:09:05.535855   62050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:09:05.549808   62050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33389
	I0103 20:09:05.550147   62050 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:09:05.550708   62050 main.go:141] libmachine: Using API Version  1
	I0103 20:09:05.550733   62050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:09:05.551041   62050 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:09:05.551258   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .DriverName
	I0103 20:09:05.583981   62050 out.go:177] * Using the kvm2 driver based on existing profile
	I0103 20:09:05.585560   62050 start.go:298] selected driver: kvm2
	I0103 20:09:05.585580   62050 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-018788 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuber
netesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-018788 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.139 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRe
quested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 20:09:05.585707   62050 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0103 20:09:05.586411   62050 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:09:05.586494   62050 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17885-9609/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0103 20:09:05.601346   62050 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0103 20:09:05.601747   62050 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0103 20:09:05.601812   62050 cni.go:84] Creating CNI manager for ""
	I0103 20:09:05.601828   62050 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0103 20:09:05.601839   62050 start_flags.go:323] config:
	{Name:default-k8s-diff-port-018788 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-01878
8 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.139 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/
home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 20:09:05.602011   62050 iso.go:125] acquiring lock: {Name:mk59d09085a9554144b68de9b7bfe0e0fce53cc5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:09:05.604007   62050 out.go:177] * Starting control plane node default-k8s-diff-port-018788 in cluster default-k8s-diff-port-018788
	I0103 20:09:03.174819   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:09:06.246788   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:09:04.840696   62015 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0103 20:09:04.840826   62015 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/no-preload-749210/config.json ...
	I0103 20:09:04.840950   62015 cache.go:107] acquiring lock: {Name:mk76774936d94ce826f83ee0faaaf3557831e6bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:09:04.840994   62015 cache.go:107] acquiring lock: {Name:mk25b47a2b083e99837dbc206b0832b20d7da669 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:09:04.841017   62015 cache.go:107] acquiring lock: {Name:mk0a26120b5274bc796f1ae286da54dda262a5a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:09:04.841059   62015 cache.go:115] /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 exists
	I0103 20:09:04.841064   62015 start.go:365] acquiring machines lock for no-preload-749210: {Name:mk43df5d7e9fef8aa5f3e5c539ca15bff35ae8cf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0103 20:09:04.841070   62015 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2" took 128.344µs
	I0103 20:09:04.841078   62015 cache.go:115] /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 exists
	I0103 20:09:04.841081   62015 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 succeeded
	I0103 20:09:04.841085   62015 cache.go:115] /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 exists
	I0103 20:09:04.840951   62015 cache.go:107] acquiring lock: {Name:mk372d2259ddc4c784d2a14a7416ba9b749d6f9a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:09:04.841089   62015 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9" took 97.811µs
	I0103 20:09:04.841093   62015 cache.go:96] cache image "registry.k8s.io/etcd:3.5.10-0" -> "/home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0" took 87.964µs
	I0103 20:09:04.841108   62015 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 succeeded
	I0103 20:09:04.841109   62015 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.10-0 -> /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 succeeded
	I0103 20:09:04.841115   62015 cache.go:115] /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0103 20:09:04.841052   62015 cache.go:107] acquiring lock: {Name:mk04d21d7cdef9332755ef804a44022ba9c4a8c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:09:04.841129   62015 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 185.143µs
	I0103 20:09:04.841155   62015 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0103 20:09:04.841139   62015 cache.go:107] acquiring lock: {Name:mk5c34e1c9b00efde01e776962411ad1105596ae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:09:04.841183   62015 cache.go:115] /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0103 20:09:04.841203   62015 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1" took 176.832µs
	I0103 20:09:04.841212   62015 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0103 20:09:04.841400   62015 cache.go:107] acquiring lock: {Name:mk0ae9e390d74a93289bc4e45b5511dce57beeb9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:09:04.841216   62015 cache.go:107] acquiring lock: {Name:mkccb08ee6224be0e6786052f4bebc8d21ec8a42 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:09:04.841614   62015 cache.go:115] /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 exists
	I0103 20:09:04.841633   62015 cache.go:115] /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 exists
	I0103 20:09:04.841675   62015 cache.go:115] /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 exists
	I0103 20:09:04.841679   62015 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2" took 497.325µs
	I0103 20:09:04.841672   62015 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2" took 557.891µs
	I0103 20:09:04.841716   62015 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 succeeded
	I0103 20:09:04.841696   62015 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2" took 499.205µs
	I0103 20:09:04.841745   62015 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 succeeded
	I0103 20:09:04.841706   62015 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 succeeded
	I0103 20:09:04.841755   62015 cache.go:87] Successfully saved all images to host disk.
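The cache.go lines above all follow the same check-then-skip pattern: take a per-image lock, stat the cached tarball under .minikube/cache/images, and only pull and save the image when that file is missing, which is why every image here reports "exists ... succeeded" within microseconds. A minimal, hypothetical Go sketch of that pattern (ensureCached and the save callback are illustrative names, not minikube's actual API):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
        "sync"
    )

    // ensureCached sketches the check-then-skip pattern in the cache.go lines
    // above: take the image's lock, stat the cached tarball, and only save it
    // when it is not already on disk.
    func ensureCached(cacheDir, image string, mu *sync.Mutex, save func(dst string) error) error {
        mu.Lock()
        defer mu.Unlock()
        // "registry.k8s.io/pause:3.9" is cached as ".../registry.k8s.io/pause_3.9".
        dst := filepath.Join(cacheDir, strings.ReplaceAll(image, ":", "_"))
        if _, err := os.Stat(dst); err == nil {
            fmt.Printf("%s exists, skipping save\n", dst)
            return nil
        }
        return save(dst)
    }

    func main() {
        var mu sync.Mutex
        err := ensureCached(os.TempDir(), "registry.k8s.io/pause:3.9", &mu, func(dst string) error {
            if err := os.MkdirAll(filepath.Dir(dst), 0o755); err != nil {
                return err
            }
            return os.WriteFile(dst, nil, 0o644) // stand-in for pulling and tarring the image
        })
        fmt.Println("cache result:", err)
    }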
	I0103 20:09:05.605517   62050 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0103 20:09:05.605574   62050 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0103 20:09:05.605590   62050 cache.go:56] Caching tarball of preloaded images
	I0103 20:09:05.605669   62050 preload.go:174] Found /home/jenkins/minikube-integration/17885-9609/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0103 20:09:05.605681   62050 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0103 20:09:05.605787   62050 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/default-k8s-diff-port-018788/config.json ...
	I0103 20:09:05.605973   62050 start.go:365] acquiring machines lock for default-k8s-diff-port-018788: {Name:mk43df5d7e9fef8aa5f3e5c539ca15bff35ae8cf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0103 20:09:12.326805   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:09:15.398807   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:09:21.478760   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:09:24.550821   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:09:30.630841   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:09:33.702766   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:09:39.782732   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:09:42.854926   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:09:48.934815   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:09:52.006845   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:09:58.086804   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:10:01.158903   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:10:07.238808   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:10:10.310897   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:10:16.390869   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:10:19.462833   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:10:25.542866   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:10:28.614753   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:10:34.694867   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:10:37.766876   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:10:43.846838   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:10:46.918843   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:10:52.998853   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:10:56.070822   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:11:02.150825   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:11:05.222884   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:11:11.302787   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:11:14.374818   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:11:20.454810   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:11:23.526899   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:11:29.606842   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:11:32.678789   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:11:38.758787   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:11:41.830855   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:11:47.910801   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:11:50.982868   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:11:57.062889   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:12:00.134834   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:12:06.214856   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:12:09.286845   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:12:15.366787   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:12:18.438756   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:12:24.518814   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:12:27.590887   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
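The long run of "no route to host" lines is the provisioner for old-k8s-version-927922 repeatedly trying to open a TCP connection to the guest's SSH port while the VM is unreachable; each dial fails and is retried a few seconds later until the surrounding timeout gives up, which is what eventually produces the "provision: host is not running" error further down. A small, hypothetical sketch of such a dial-until-reachable loop (the address and timings are illustrative):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // waitForSSH keeps dialing addr until a TCP connection succeeds or the
    // deadline passes, logging each failure the way the libmachine lines above do.
    func waitForSSH(addr string, deadline time.Duration) error {
        stop := time.Now().Add(deadline)
        for time.Now().Before(stop) {
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err == nil {
                conn.Close()
                return nil
            }
            fmt.Printf("Error dialing TCP: %v\n", err)
            time.Sleep(3 * time.Second) // retry until the guest is back on the network
        }
        return fmt.Errorf("SSH on %s never became reachable", addr)
    }

    func main() {
        // Illustrative address only; in the log this is the old-k8s-version guest.
        if err := waitForSSH("192.168.72.12:22", 9*time.Second); err != nil {
            fmt.Println(err)
        }
    }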
	I0103 20:12:30.594981   61676 start.go:369] acquired machines lock for "embed-certs-451331" in 3m56.986277612s
	I0103 20:12:30.595030   61676 start.go:96] Skipping create...Using existing machine configuration
	I0103 20:12:30.595039   61676 fix.go:54] fixHost starting: 
	I0103 20:12:30.595434   61676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:12:30.595466   61676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:12:30.609917   61676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43047
	I0103 20:12:30.610302   61676 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:12:30.610819   61676 main.go:141] libmachine: Using API Version  1
	I0103 20:12:30.610845   61676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:12:30.611166   61676 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:12:30.611348   61676 main.go:141] libmachine: (embed-certs-451331) Calling .DriverName
	I0103 20:12:30.611486   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetState
	I0103 20:12:30.613108   61676 fix.go:102] recreateIfNeeded on embed-certs-451331: state=Stopped err=<nil>
	I0103 20:12:30.613128   61676 main.go:141] libmachine: (embed-certs-451331) Calling .DriverName
	W0103 20:12:30.613291   61676 fix.go:128] unexpected machine state, will restart: <nil>
	I0103 20:12:30.615194   61676 out.go:177] * Restarting existing kvm2 VM for "embed-certs-451331" ...
	I0103 20:12:30.592855   61400 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0103 20:12:30.592889   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHHostname
	I0103 20:12:30.594843   61400 machine.go:91] provisioned docker machine in 4m37.406324683s
	I0103 20:12:30.594886   61400 fix.go:56] fixHost completed within 4m37.42774841s
	I0103 20:12:30.594892   61400 start.go:83] releasing machines lock for "old-k8s-version-927922", held for 4m37.427764519s
	W0103 20:12:30.594913   61400 start.go:694] error starting host: provision: host is not running
	W0103 20:12:30.595005   61400 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0103 20:12:30.595014   61400 start.go:709] Will try again in 5 seconds ...
	I0103 20:12:30.616365   61676 main.go:141] libmachine: (embed-certs-451331) Calling .Start
	I0103 20:12:30.616513   61676 main.go:141] libmachine: (embed-certs-451331) Ensuring networks are active...
	I0103 20:12:30.617380   61676 main.go:141] libmachine: (embed-certs-451331) Ensuring network default is active
	I0103 20:12:30.617718   61676 main.go:141] libmachine: (embed-certs-451331) Ensuring network mk-embed-certs-451331 is active
	I0103 20:12:30.618103   61676 main.go:141] libmachine: (embed-certs-451331) Getting domain xml...
	I0103 20:12:30.618735   61676 main.go:141] libmachine: (embed-certs-451331) Creating domain...
	I0103 20:12:31.839751   61676 main.go:141] libmachine: (embed-certs-451331) Waiting to get IP...
	I0103 20:12:31.840608   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:31.841035   61676 main.go:141] libmachine: (embed-certs-451331) DBG | unable to find current IP address of domain embed-certs-451331 in network mk-embed-certs-451331
	I0103 20:12:31.841117   61676 main.go:141] libmachine: (embed-certs-451331) DBG | I0103 20:12:31.841008   62575 retry.go:31] will retry after 303.323061ms: waiting for machine to come up
	I0103 20:12:32.146508   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:32.147005   61676 main.go:141] libmachine: (embed-certs-451331) DBG | unable to find current IP address of domain embed-certs-451331 in network mk-embed-certs-451331
	I0103 20:12:32.147037   61676 main.go:141] libmachine: (embed-certs-451331) DBG | I0103 20:12:32.146950   62575 retry.go:31] will retry after 240.92709ms: waiting for machine to come up
	I0103 20:12:32.389487   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:32.389931   61676 main.go:141] libmachine: (embed-certs-451331) DBG | unable to find current IP address of domain embed-certs-451331 in network mk-embed-certs-451331
	I0103 20:12:32.389962   61676 main.go:141] libmachine: (embed-certs-451331) DBG | I0103 20:12:32.389887   62575 retry.go:31] will retry after 473.263026ms: waiting for machine to come up
	I0103 20:12:32.864624   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:32.865060   61676 main.go:141] libmachine: (embed-certs-451331) DBG | unable to find current IP address of domain embed-certs-451331 in network mk-embed-certs-451331
	I0103 20:12:32.865082   61676 main.go:141] libmachine: (embed-certs-451331) DBG | I0103 20:12:32.865006   62575 retry.go:31] will retry after 473.373684ms: waiting for machine to come up
	I0103 20:12:33.339691   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:33.340156   61676 main.go:141] libmachine: (embed-certs-451331) DBG | unable to find current IP address of domain embed-certs-451331 in network mk-embed-certs-451331
	I0103 20:12:33.340189   61676 main.go:141] libmachine: (embed-certs-451331) DBG | I0103 20:12:33.340098   62575 retry.go:31] will retry after 639.850669ms: waiting for machine to come up
	I0103 20:12:35.596669   61400 start.go:365] acquiring machines lock for old-k8s-version-927922: {Name:mk43df5d7e9fef8aa5f3e5c539ca15bff35ae8cf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0103 20:12:33.982104   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:33.982622   61676 main.go:141] libmachine: (embed-certs-451331) DBG | unable to find current IP address of domain embed-certs-451331 in network mk-embed-certs-451331
	I0103 20:12:33.982655   61676 main.go:141] libmachine: (embed-certs-451331) DBG | I0103 20:12:33.982583   62575 retry.go:31] will retry after 589.282725ms: waiting for machine to come up
	I0103 20:12:34.573280   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:34.573692   61676 main.go:141] libmachine: (embed-certs-451331) DBG | unable to find current IP address of domain embed-certs-451331 in network mk-embed-certs-451331
	I0103 20:12:34.573716   61676 main.go:141] libmachine: (embed-certs-451331) DBG | I0103 20:12:34.573639   62575 retry.go:31] will retry after 884.387817ms: waiting for machine to come up
	I0103 20:12:35.459819   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:35.460233   61676 main.go:141] libmachine: (embed-certs-451331) DBG | unable to find current IP address of domain embed-certs-451331 in network mk-embed-certs-451331
	I0103 20:12:35.460287   61676 main.go:141] libmachine: (embed-certs-451331) DBG | I0103 20:12:35.460168   62575 retry.go:31] will retry after 1.326571684s: waiting for machine to come up
	I0103 20:12:36.788923   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:36.789429   61676 main.go:141] libmachine: (embed-certs-451331) DBG | unable to find current IP address of domain embed-certs-451331 in network mk-embed-certs-451331
	I0103 20:12:36.789452   61676 main.go:141] libmachine: (embed-certs-451331) DBG | I0103 20:12:36.789395   62575 retry.go:31] will retry after 1.436230248s: waiting for machine to come up
	I0103 20:12:38.227994   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:38.228374   61676 main.go:141] libmachine: (embed-certs-451331) DBG | unable to find current IP address of domain embed-certs-451331 in network mk-embed-certs-451331
	I0103 20:12:38.228397   61676 main.go:141] libmachine: (embed-certs-451331) DBG | I0103 20:12:38.228336   62575 retry.go:31] will retry after 2.127693351s: waiting for machine to come up
	I0103 20:12:40.358485   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:40.358968   61676 main.go:141] libmachine: (embed-certs-451331) DBG | unable to find current IP address of domain embed-certs-451331 in network mk-embed-certs-451331
	I0103 20:12:40.358998   61676 main.go:141] libmachine: (embed-certs-451331) DBG | I0103 20:12:40.358912   62575 retry.go:31] will retry after 1.816116886s: waiting for machine to come up
	I0103 20:12:42.177782   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:42.178359   61676 main.go:141] libmachine: (embed-certs-451331) DBG | unable to find current IP address of domain embed-certs-451331 in network mk-embed-certs-451331
	I0103 20:12:42.178390   61676 main.go:141] libmachine: (embed-certs-451331) DBG | I0103 20:12:42.178296   62575 retry.go:31] will retry after 3.199797073s: waiting for machine to come up
	I0103 20:12:45.381712   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:45.382053   61676 main.go:141] libmachine: (embed-certs-451331) DBG | unable to find current IP address of domain embed-certs-451331 in network mk-embed-certs-451331
	I0103 20:12:45.382075   61676 main.go:141] libmachine: (embed-certs-451331) DBG | I0103 20:12:45.381991   62575 retry.go:31] will retry after 3.573315393s: waiting for machine to come up
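The retry.go lines above are the wait-for-IP loop: the driver looks for a DHCP lease matching the domain's MAC address in network mk-embed-certs-451331 and, while none exists, sleeps for a growing interval (from roughly 300 ms up to a few seconds) before polling again. A sketch of that grow-the-backoff polling pattern, with a hypothetical lookup function standing in for the libvirt lease query:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    var errNoLease = errors.New("unable to find current IP address")

    // waitForIP polls lookup until it returns an address, roughly doubling the
    // wait between attempts the way the retry.go lines above do.
    func waitForIP(lookup func() (string, error), attempts int) (string, error) {
        delay := 300 * time.Millisecond
        for i := 0; i < attempts; i++ {
            if ip, err := lookup(); err == nil {
                return ip, nil
            }
            fmt.Printf("will retry after %s: waiting for machine to come up\n", delay)
            time.Sleep(delay)
            if delay < 4*time.Second {
                delay *= 2
            }
        }
        return "", errNoLease
    }

    func main() {
        calls := 0
        ip, err := waitForIP(func() (string, error) {
            calls++
            if calls < 3 {
                return "", errNoLease // no DHCP lease for the MAC yet
            }
            return "192.168.50.197", nil
        }, 10)
        fmt.Println(ip, err)
    }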
	I0103 20:12:50.159164   62015 start.go:369] acquired machines lock for "no-preload-749210" in 3m45.318070652s
	I0103 20:12:50.159226   62015 start.go:96] Skipping create...Using existing machine configuration
	I0103 20:12:50.159235   62015 fix.go:54] fixHost starting: 
	I0103 20:12:50.159649   62015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:12:50.159688   62015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:12:50.176573   62015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34959
	I0103 20:12:50.176998   62015 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:12:50.177504   62015 main.go:141] libmachine: Using API Version  1
	I0103 20:12:50.177529   62015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:12:50.177925   62015 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:12:50.178125   62015 main.go:141] libmachine: (no-preload-749210) Calling .DriverName
	I0103 20:12:50.178297   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetState
	I0103 20:12:50.179850   62015 fix.go:102] recreateIfNeeded on no-preload-749210: state=Stopped err=<nil>
	I0103 20:12:50.179873   62015 main.go:141] libmachine: (no-preload-749210) Calling .DriverName
	W0103 20:12:50.180066   62015 fix.go:128] unexpected machine state, will restart: <nil>
	I0103 20:12:50.182450   62015 out.go:177] * Restarting existing kvm2 VM for "no-preload-749210" ...
	I0103 20:12:48.959159   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:48.959637   61676 main.go:141] libmachine: (embed-certs-451331) Found IP for machine: 192.168.50.197
	I0103 20:12:48.959655   61676 main.go:141] libmachine: (embed-certs-451331) Reserving static IP address...
	I0103 20:12:48.959666   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has current primary IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:48.960051   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "embed-certs-451331", mac: "52:54:00:38:4a:19", ip: "192.168.50.197"} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:12:48.960073   61676 main.go:141] libmachine: (embed-certs-451331) DBG | skip adding static IP to network mk-embed-certs-451331 - found existing host DHCP lease matching {name: "embed-certs-451331", mac: "52:54:00:38:4a:19", ip: "192.168.50.197"}
	I0103 20:12:48.960086   61676 main.go:141] libmachine: (embed-certs-451331) Reserved static IP address: 192.168.50.197
	I0103 20:12:48.960101   61676 main.go:141] libmachine: (embed-certs-451331) Waiting for SSH to be available...
	I0103 20:12:48.960117   61676 main.go:141] libmachine: (embed-certs-451331) DBG | Getting to WaitForSSH function...
	I0103 20:12:48.962160   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:48.962443   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:4a:19", ip: ""} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:12:48.962478   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:48.962611   61676 main.go:141] libmachine: (embed-certs-451331) DBG | Using SSH client type: external
	I0103 20:12:48.962631   61676 main.go:141] libmachine: (embed-certs-451331) DBG | Using SSH private key: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/embed-certs-451331/id_rsa (-rw-------)
	I0103 20:12:48.962661   61676 main.go:141] libmachine: (embed-certs-451331) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.197 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17885-9609/.minikube/machines/embed-certs-451331/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0103 20:12:48.962681   61676 main.go:141] libmachine: (embed-certs-451331) DBG | About to run SSH command:
	I0103 20:12:48.962718   61676 main.go:141] libmachine: (embed-certs-451331) DBG | exit 0
	I0103 20:12:49.058790   61676 main.go:141] libmachine: (embed-certs-451331) DBG | SSH cmd err, output: <nil>: 
	I0103 20:12:49.059176   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetConfigRaw
	I0103 20:12:49.059838   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetIP
	I0103 20:12:49.062025   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:49.062407   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:4a:19", ip: ""} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:12:49.062440   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:49.062697   61676 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/embed-certs-451331/config.json ...
	I0103 20:12:49.062878   61676 machine.go:88] provisioning docker machine ...
	I0103 20:12:49.062894   61676 main.go:141] libmachine: (embed-certs-451331) Calling .DriverName
	I0103 20:12:49.063097   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetMachineName
	I0103 20:12:49.063258   61676 buildroot.go:166] provisioning hostname "embed-certs-451331"
	I0103 20:12:49.063278   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetMachineName
	I0103 20:12:49.063423   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHHostname
	I0103 20:12:49.065735   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:49.066121   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:4a:19", ip: ""} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:12:49.066161   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:49.066328   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHPort
	I0103 20:12:49.066507   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHKeyPath
	I0103 20:12:49.066695   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHKeyPath
	I0103 20:12:49.066860   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHUsername
	I0103 20:12:49.067065   61676 main.go:141] libmachine: Using SSH client type: native
	I0103 20:12:49.067455   61676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.50.197 22 <nil> <nil>}
	I0103 20:12:49.067469   61676 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-451331 && echo "embed-certs-451331" | sudo tee /etc/hostname
	I0103 20:12:49.210431   61676 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-451331
	
	I0103 20:12:49.210465   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHHostname
	I0103 20:12:49.213162   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:49.213503   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:4a:19", ip: ""} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:12:49.213573   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:49.213682   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHPort
	I0103 20:12:49.213911   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHKeyPath
	I0103 20:12:49.214094   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHKeyPath
	I0103 20:12:49.214289   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHUsername
	I0103 20:12:49.214449   61676 main.go:141] libmachine: Using SSH client type: native
	I0103 20:12:49.214837   61676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.50.197 22 <nil> <nil>}
	I0103 20:12:49.214856   61676 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-451331' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-451331/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-451331' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0103 20:12:49.350098   61676 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0103 20:12:49.350134   61676 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17885-9609/.minikube CaCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17885-9609/.minikube}
	I0103 20:12:49.350158   61676 buildroot.go:174] setting up certificates
	I0103 20:12:49.350172   61676 provision.go:83] configureAuth start
	I0103 20:12:49.350188   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetMachineName
	I0103 20:12:49.350497   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetIP
	I0103 20:12:49.352947   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:49.353356   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:4a:19", ip: ""} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:12:49.353387   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:49.353448   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHHostname
	I0103 20:12:49.355701   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:49.356005   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:4a:19", ip: ""} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:12:49.356033   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:49.356183   61676 provision.go:138] copyHostCerts
	I0103 20:12:49.356241   61676 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem, removing ...
	I0103 20:12:49.356254   61676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem
	I0103 20:12:49.356322   61676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem (1078 bytes)
	I0103 20:12:49.356413   61676 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem, removing ...
	I0103 20:12:49.356421   61676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem
	I0103 20:12:49.356446   61676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem (1123 bytes)
	I0103 20:12:49.356506   61676 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem, removing ...
	I0103 20:12:49.356513   61676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem
	I0103 20:12:49.356535   61676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem (1679 bytes)
	I0103 20:12:49.356587   61676 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem org=jenkins.embed-certs-451331 san=[192.168.50.197 192.168.50.197 localhost 127.0.0.1 minikube embed-certs-451331]
	I0103 20:12:49.413721   61676 provision.go:172] copyRemoteCerts
	I0103 20:12:49.413781   61676 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0103 20:12:49.413804   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHHostname
	I0103 20:12:49.416658   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:49.417143   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:4a:19", ip: ""} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:12:49.417170   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:49.417420   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHPort
	I0103 20:12:49.417617   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHKeyPath
	I0103 20:12:49.417814   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHUsername
	I0103 20:12:49.417977   61676 sshutil.go:53] new ssh client: &{IP:192.168.50.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/embed-certs-451331/id_rsa Username:docker}
	I0103 20:12:49.510884   61676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0103 20:12:49.533465   61676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0103 20:12:49.554895   61676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0103 20:12:49.576069   61676 provision.go:86] duration metric: configureAuth took 225.882364ms
	I0103 20:12:49.576094   61676 buildroot.go:189] setting minikube options for container-runtime
	I0103 20:12:49.576310   61676 config.go:182] Loaded profile config "embed-certs-451331": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 20:12:49.576387   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHHostname
	I0103 20:12:49.579119   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:49.579413   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:4a:19", ip: ""} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:12:49.579461   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:49.579590   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHPort
	I0103 20:12:49.579780   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHKeyPath
	I0103 20:12:49.579968   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHKeyPath
	I0103 20:12:49.580121   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHUsername
	I0103 20:12:49.580271   61676 main.go:141] libmachine: Using SSH client type: native
	I0103 20:12:49.580591   61676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.50.197 22 <nil> <nil>}
	I0103 20:12:49.580615   61676 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0103 20:12:49.883159   61676 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0103 20:12:49.883188   61676 machine.go:91] provisioned docker machine in 820.299871ms
	I0103 20:12:49.883199   61676 start.go:300] post-start starting for "embed-certs-451331" (driver="kvm2")
	I0103 20:12:49.883212   61676 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0103 20:12:49.883239   61676 main.go:141] libmachine: (embed-certs-451331) Calling .DriverName
	I0103 20:12:49.883565   61676 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0103 20:12:49.883599   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHHostname
	I0103 20:12:49.886365   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:49.886658   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:4a:19", ip: ""} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:12:49.886695   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:49.886878   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHPort
	I0103 20:12:49.887091   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHKeyPath
	I0103 20:12:49.887293   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHUsername
	I0103 20:12:49.887468   61676 sshutil.go:53] new ssh client: &{IP:192.168.50.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/embed-certs-451331/id_rsa Username:docker}
	I0103 20:12:49.985529   61676 ssh_runner.go:195] Run: cat /etc/os-release
	I0103 20:12:49.989732   61676 info.go:137] Remote host: Buildroot 2021.02.12
	I0103 20:12:49.989758   61676 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-9609/.minikube/addons for local assets ...
	I0103 20:12:49.989820   61676 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-9609/.minikube/files for local assets ...
	I0103 20:12:49.989891   61676 filesync.go:149] local asset: /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem -> 167952.pem in /etc/ssl/certs
	I0103 20:12:49.989981   61676 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0103 20:12:49.999882   61676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem --> /etc/ssl/certs/167952.pem (1708 bytes)
	I0103 20:12:50.022936   61676 start.go:303] post-start completed in 139.710189ms
	I0103 20:12:50.022966   61676 fix.go:56] fixHost completed within 19.427926379s
	I0103 20:12:50.023002   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHHostname
	I0103 20:12:50.025667   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:50.025940   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:4a:19", ip: ""} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:12:50.025973   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:50.026212   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHPort
	I0103 20:12:50.026424   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHKeyPath
	I0103 20:12:50.026671   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHKeyPath
	I0103 20:12:50.026838   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHUsername
	I0103 20:12:50.027074   61676 main.go:141] libmachine: Using SSH client type: native
	I0103 20:12:50.027381   61676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.50.197 22 <nil> <nil>}
	I0103 20:12:50.027393   61676 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0103 20:12:50.159031   61676 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704312770.110466062
	
	I0103 20:12:50.159053   61676 fix.go:206] guest clock: 1704312770.110466062
	I0103 20:12:50.159061   61676 fix.go:219] Guest: 2024-01-03 20:12:50.110466062 +0000 UTC Remote: 2024-01-03 20:12:50.022969488 +0000 UTC m=+256.568741537 (delta=87.496574ms)
	I0103 20:12:50.159083   61676 fix.go:190] guest clock delta is within tolerance: 87.496574ms
	I0103 20:12:50.159089   61676 start.go:83] releasing machines lock for "embed-certs-451331", held for 19.564082089s
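The fix.go lines just above read the guest's clock over SSH with a date command, compare it with the host's, and accept the machine when the drift is small; here the delta is about 87 ms and is reported as within tolerance. A minimal sketch of that comparison (the 2 s tolerance below is an assumed example value, not necessarily what minikube uses):

    package main

    import (
        "fmt"
        "time"
    )

    // clockDeltaOK reports whether the guest clock is close enough to the host
    // clock, as in the "guest clock delta is within tolerance" line above.
    func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        return delta, delta <= tolerance
    }

    func main() {
        guest := time.Unix(1704312770, 110466062) // 2024-01-03 20:12:50.110466062 UTC, from the log
        host := guest.Add(-87496574 * time.Nanosecond)
        delta, ok := clockDeltaOK(guest, host, 2*time.Second)
        fmt.Printf("delta=%s within tolerance=%v\n", delta, ok)
    }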
	I0103 20:12:50.159117   61676 main.go:141] libmachine: (embed-certs-451331) Calling .DriverName
	I0103 20:12:50.159421   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetIP
	I0103 20:12:50.162216   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:50.162550   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:4a:19", ip: ""} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:12:50.162577   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:50.162762   61676 main.go:141] libmachine: (embed-certs-451331) Calling .DriverName
	I0103 20:12:50.163248   61676 main.go:141] libmachine: (embed-certs-451331) Calling .DriverName
	I0103 20:12:50.163433   61676 main.go:141] libmachine: (embed-certs-451331) Calling .DriverName
	I0103 20:12:50.163532   61676 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0103 20:12:50.163583   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHHostname
	I0103 20:12:50.163644   61676 ssh_runner.go:195] Run: cat /version.json
	I0103 20:12:50.163671   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHHostname
	I0103 20:12:50.166588   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:50.166753   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:50.166957   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:4a:19", ip: ""} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:12:50.166987   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:50.167192   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHPort
	I0103 20:12:50.167329   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:4a:19", ip: ""} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:12:50.167358   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:50.167362   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHKeyPath
	I0103 20:12:50.167500   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHUsername
	I0103 20:12:50.167590   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHPort
	I0103 20:12:50.167684   61676 sshutil.go:53] new ssh client: &{IP:192.168.50.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/embed-certs-451331/id_rsa Username:docker}
	I0103 20:12:50.167761   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHKeyPath
	I0103 20:12:50.167905   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHUsername
	I0103 20:12:50.168096   61676 sshutil.go:53] new ssh client: &{IP:192.168.50.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/embed-certs-451331/id_rsa Username:docker}
	I0103 20:12:50.298482   61676 ssh_runner.go:195] Run: systemctl --version
	I0103 20:12:50.304252   61676 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0103 20:12:50.442709   61676 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0103 20:12:50.448879   61676 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0103 20:12:50.448959   61676 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0103 20:12:50.467183   61676 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0103 20:12:50.467203   61676 start.go:475] detecting cgroup driver to use...
	I0103 20:12:50.467269   61676 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0103 20:12:50.482438   61676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0103 20:12:50.493931   61676 docker.go:203] disabling cri-docker service (if available) ...
	I0103 20:12:50.493997   61676 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0103 20:12:50.506860   61676 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0103 20:12:50.519279   61676 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0103 20:12:50.627391   61676 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0103 20:12:50.748160   61676 docker.go:219] disabling docker service ...
	I0103 20:12:50.748220   61676 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0103 20:12:50.760970   61676 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0103 20:12:50.772252   61676 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0103 20:12:50.889707   61676 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0103 20:12:51.003794   61676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0103 20:12:51.016226   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0103 20:12:51.032543   61676 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0103 20:12:51.032600   61676 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:12:51.042477   61676 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0103 20:12:51.042559   61676 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:12:51.053103   61676 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:12:51.063469   61676 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:12:51.073912   61676 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0103 20:12:51.083314   61676 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0103 20:12:51.092920   61676 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0103 20:12:51.092969   61676 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0103 20:12:51.106690   61676 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0103 20:12:51.115815   61676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0103 20:12:51.230139   61676 ssh_runner.go:195] Run: sudo systemctl restart crio
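Editor's note: the Run lines above rewrite the CRI-O drop-in (pause_image, cgroup_manager, conmon_cgroup) with sed over SSH and then restart crio. The Go sketch below applies the same two substitutions natively to a local copy of the file; it is a simplified stand-in for the sed-over-SSH flow, and the path in main is a placeholder.

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    )

    // rewriteCrioConf sets pause_image and cgroup_manager in a CRI-O drop-in,
    // mirroring the two sed commands logged above.
    func rewriteCrioConf(path, pauseImage, cgroupManager string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", pauseImage)))
    	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAll(out, []byte(fmt.Sprintf("cgroup_manager = %q", cgroupManager)))
    	return os.WriteFile(path, out, 0o644)
    }

    func main() {
    	// "02-crio.conf" is a local scratch copy; the real target is /etc/crio/crio.conf.d/02-crio.conf on the node.
    	if err := rewriteCrioConf("02-crio.conf", "registry.k8s.io/pause:3.9", "cgroupfs"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }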
	I0103 20:12:51.413184   61676 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0103 20:12:51.413315   61676 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0103 20:12:51.417926   61676 start.go:543] Will wait 60s for crictl version
	I0103 20:12:51.417988   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:12:51.421507   61676 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0103 20:12:51.465370   61676 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0103 20:12:51.465453   61676 ssh_runner.go:195] Run: crio --version
	I0103 20:12:51.519590   61676 ssh_runner.go:195] Run: crio --version
	I0103 20:12:51.582633   61676 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
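Editor's note: the "Will wait 60s for socket path" step above amounts to polling for /var/run/crio/crio.sock until a deadline passes. A minimal Go sketch of that wait follows; the 500ms poll interval is an assumption, since the actual interval is not shown in the log.

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForSocket polls until path exists or timeout elapses.
    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out waiting for %s", path)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }

    func main() {
    	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("crio socket is ready")
    }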
	I0103 20:12:51.583888   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetIP
	I0103 20:12:51.587068   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:51.587442   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:4a:19", ip: ""} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:12:51.587486   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:51.587724   61676 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0103 20:12:51.591798   61676 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0103 20:12:51.602798   61676 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0103 20:12:51.602871   61676 ssh_runner.go:195] Run: sudo crictl images --output json
	I0103 20:12:51.641736   61676 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0103 20:12:51.641799   61676 ssh_runner.go:195] Run: which lz4
	I0103 20:12:51.645386   61676 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0103 20:12:51.649168   61676 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0103 20:12:51.649196   61676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0103 20:12:53.428537   61676 crio.go:444] Took 1.783185 seconds to copy over tarball
	I0103 20:12:53.428601   61676 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
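Editor's note: the preload step above checks for /preloaded.tar.lz4 on the node, copies it over SSH when it is missing, and unpacks it with "tar -I lz4 -C /var -xf". The Go sketch below covers only the extract step; sudo and the scp transfer are omitted, and the paths are the ones logged.

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // extractPreload unpacks the lz4-compressed images tarball into destDir,
    // the equivalent of: tar -I lz4 -C /var -xf /preloaded.tar.lz4
    func extractPreload(tarball, destDir string) error {
    	if _, err := os.Stat(tarball); err != nil {
    		return fmt.Errorf("tarball missing (the log scp's it over SSH at this point): %w", err)
    	}
    	cmd := exec.Command("tar", "-I", "lz4", "-C", destDir, "-xf", tarball)
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	return cmd.Run()
    }

    func main() {
    	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }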
	I0103 20:12:50.183891   62015 main.go:141] libmachine: (no-preload-749210) Calling .Start
	I0103 20:12:50.184083   62015 main.go:141] libmachine: (no-preload-749210) Ensuring networks are active...
	I0103 20:12:50.184749   62015 main.go:141] libmachine: (no-preload-749210) Ensuring network default is active
	I0103 20:12:50.185084   62015 main.go:141] libmachine: (no-preload-749210) Ensuring network mk-no-preload-749210 is active
	I0103 20:12:50.185435   62015 main.go:141] libmachine: (no-preload-749210) Getting domain xml...
	I0103 20:12:50.186067   62015 main.go:141] libmachine: (no-preload-749210) Creating domain...
	I0103 20:12:51.468267   62015 main.go:141] libmachine: (no-preload-749210) Waiting to get IP...
	I0103 20:12:51.469108   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:12:51.469584   62015 main.go:141] libmachine: (no-preload-749210) DBG | unable to find current IP address of domain no-preload-749210 in network mk-no-preload-749210
	I0103 20:12:51.469664   62015 main.go:141] libmachine: (no-preload-749210) DBG | I0103 20:12:51.469570   62702 retry.go:31] will retry after 254.191618ms: waiting for machine to come up
	I0103 20:12:51.724958   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:12:51.725657   62015 main.go:141] libmachine: (no-preload-749210) DBG | unable to find current IP address of domain no-preload-749210 in network mk-no-preload-749210
	I0103 20:12:51.725683   62015 main.go:141] libmachine: (no-preload-749210) DBG | I0103 20:12:51.725609   62702 retry.go:31] will retry after 279.489548ms: waiting for machine to come up
	I0103 20:12:52.007176   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:12:52.007682   62015 main.go:141] libmachine: (no-preload-749210) DBG | unable to find current IP address of domain no-preload-749210 in network mk-no-preload-749210
	I0103 20:12:52.007713   62015 main.go:141] libmachine: (no-preload-749210) DBG | I0103 20:12:52.007628   62702 retry.go:31] will retry after 422.96552ms: waiting for machine to come up
	I0103 20:12:52.432345   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:12:52.432873   62015 main.go:141] libmachine: (no-preload-749210) DBG | unable to find current IP address of domain no-preload-749210 in network mk-no-preload-749210
	I0103 20:12:52.432912   62015 main.go:141] libmachine: (no-preload-749210) DBG | I0103 20:12:52.432844   62702 retry.go:31] will retry after 561.295375ms: waiting for machine to come up
	I0103 20:12:52.995438   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:12:52.995929   62015 main.go:141] libmachine: (no-preload-749210) DBG | unable to find current IP address of domain no-preload-749210 in network mk-no-preload-749210
	I0103 20:12:52.995963   62015 main.go:141] libmachine: (no-preload-749210) DBG | I0103 20:12:52.995878   62702 retry.go:31] will retry after 547.962782ms: waiting for machine to come up
	I0103 20:12:53.545924   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:12:53.546473   62015 main.go:141] libmachine: (no-preload-749210) DBG | unable to find current IP address of domain no-preload-749210 in network mk-no-preload-749210
	I0103 20:12:53.546558   62015 main.go:141] libmachine: (no-preload-749210) DBG | I0103 20:12:53.546453   62702 retry.go:31] will retry after 927.631327ms: waiting for machine to come up
	I0103 20:12:54.475549   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:12:54.476000   62015 main.go:141] libmachine: (no-preload-749210) DBG | unable to find current IP address of domain no-preload-749210 in network mk-no-preload-749210
	I0103 20:12:54.476046   62015 main.go:141] libmachine: (no-preload-749210) DBG | I0103 20:12:54.475945   62702 retry.go:31] will retry after 880.192703ms: waiting for machine to come up
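Editor's note: the retry.go lines above wait for the no-preload VM to obtain an IP, retrying with growing, jittered delays (254ms, 279ms, 422ms, ...). Below is a self-contained Go sketch of that pattern; the exact backoff policy is an assumption, and the lookup function is a stand-in for the libvirt DHCP-lease query the driver performs.

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // waitForIP retries lookup with a jittered, growing delay until it returns an
    // IP or the overall timeout passes.
    func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	delay := 250 * time.Millisecond
    	for time.Now().Before(deadline) {
    		if ip, err := lookup(); err == nil && ip != "" {
    			return ip, nil
    		}
    		jittered := delay + time.Duration(rand.Int63n(int64(delay)))
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", jittered)
    		time.Sleep(jittered)
    		delay += delay / 2 // grow the base delay between attempts
    	}
    	return "", errors.New("timed out waiting for machine IP")
    }

    func main() {
    	// The lookup here always fails, just to exercise the retry path.
    	ip, err := waitForIP(func() (string, error) { return "", errors.New("no lease yet") }, 3*time.Second)
    	fmt.Println(ip, err)
    }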
	I0103 20:12:56.224357   61676 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.795734066s)
	I0103 20:12:56.224386   61676 crio.go:451] Took 2.795820 seconds to extract the tarball
	I0103 20:12:56.224406   61676 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0103 20:12:56.266955   61676 ssh_runner.go:195] Run: sudo crictl images --output json
	I0103 20:12:56.318766   61676 crio.go:496] all images are preloaded for cri-o runtime.
	I0103 20:12:56.318789   61676 cache_images.go:84] Images are preloaded, skipping loading
	I0103 20:12:56.318871   61676 ssh_runner.go:195] Run: crio config
	I0103 20:12:56.378376   61676 cni.go:84] Creating CNI manager for ""
	I0103 20:12:56.378401   61676 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0103 20:12:56.378423   61676 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0103 20:12:56.378451   61676 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.197 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-451331 NodeName:embed-certs-451331 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.197"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.197 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0103 20:12:56.378619   61676 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.197
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-451331"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.197
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.197"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
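Editor's note: the kubeadm config printed above is generated from the options logged at kubeadm.go:176. The Go sketch below renders just the InitConfiguration fragment with text/template; the template text and field names are illustrative, not minikube's actual template.

    package main

    import (
    	"os"
    	"text/template"
    )

    // A reduced template covering only the InitConfiguration block shown above.
    const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.APIServerPort}}
    nodeRegistration:
      criSocket: {{.CRISocket}}
      name: "{{.NodeName}}"
      kubeletExtraArgs:
        node-ip: {{.NodeIP}}
      taints: []
    `

    func main() {
    	t := template.Must(template.New("init").Parse(initCfg))
    	// Values taken from the kubeadm options line above.
    	_ = t.Execute(os.Stdout, map[string]any{
    		"AdvertiseAddress": "192.168.50.197",
    		"APIServerPort":    8443,
    		"CRISocket":        "unix:///var/run/crio/crio.sock",
    		"NodeName":         "embed-certs-451331",
    		"NodeIP":           "192.168.50.197",
    	})
    }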
	I0103 20:12:56.378714   61676 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-451331 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.197
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-451331 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0103 20:12:56.378777   61676 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0103 20:12:56.387967   61676 binaries.go:44] Found k8s binaries, skipping transfer
	I0103 20:12:56.388037   61676 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0103 20:12:56.396000   61676 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0103 20:12:56.411880   61676 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0103 20:12:56.427567   61676 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0103 20:12:56.443342   61676 ssh_runner.go:195] Run: grep 192.168.50.197	control-plane.minikube.internal$ /etc/hosts
	I0103 20:12:56.446991   61676 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.197	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
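Editor's note: the /etc/hosts command above keeps exactly one "IP<TAB>control-plane.minikube.internal" line by filtering out any old entry and appending a fresh one. Below is a native Go sketch of the same idempotent update; it writes to a scratch file rather than /etc/hosts, which the real flow rewrites via a temp file and sudo cp.

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // ensureHostsEntry removes any line ending in "\t<name>" and appends "<ip>\t<name>",
    // the same effect as the grep -v / echo / cp pipeline in the log.
    func ensureHostsEntry(path, ip, name string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if !strings.HasSuffix(line, "\t"+name) {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
    	// "hosts.test" is a placeholder scratch file.
    	if err := ensureHostsEntry("hosts.test", "192.168.50.197", "control-plane.minikube.internal"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }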
	I0103 20:12:56.458659   61676 certs.go:56] Setting up /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/embed-certs-451331 for IP: 192.168.50.197
	I0103 20:12:56.458696   61676 certs.go:190] acquiring lock for shared ca certs: {Name:mkcbd6a6a2f3ee7625ecf4a1f72bb7f9689bd33d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:12:56.458844   61676 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17885-9609/.minikube/ca.key
	I0103 20:12:56.458904   61676 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.key
	I0103 20:12:56.459010   61676 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/embed-certs-451331/client.key
	I0103 20:12:56.459092   61676 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/embed-certs-451331/apiserver.key.d719e12a
	I0103 20:12:56.459159   61676 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/embed-certs-451331/proxy-client.key
	I0103 20:12:56.459299   61676 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795.pem (1338 bytes)
	W0103 20:12:56.459341   61676 certs.go:433] ignoring /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795_empty.pem, impossibly tiny 0 bytes
	I0103 20:12:56.459358   61676 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem (1675 bytes)
	I0103 20:12:56.459400   61676 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem (1078 bytes)
	I0103 20:12:56.459434   61676 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem (1123 bytes)
	I0103 20:12:56.459466   61676 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem (1679 bytes)
	I0103 20:12:56.459522   61676 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem (1708 bytes)
	I0103 20:12:56.460408   61676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/embed-certs-451331/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0103 20:12:56.481997   61676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/embed-certs-451331/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0103 20:12:56.504016   61676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/embed-certs-451331/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0103 20:12:56.526477   61676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/embed-certs-451331/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0103 20:12:56.548471   61676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0103 20:12:56.570763   61676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0103 20:12:56.592910   61676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0103 20:12:56.617765   61676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0103 20:12:56.646025   61676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem --> /usr/share/ca-certificates/167952.pem (1708 bytes)
	I0103 20:12:56.668629   61676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0103 20:12:56.690927   61676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795.pem --> /usr/share/ca-certificates/16795.pem (1338 bytes)
	I0103 20:12:56.712067   61676 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0103 20:12:56.727773   61676 ssh_runner.go:195] Run: openssl version
	I0103 20:12:56.733000   61676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0103 20:12:56.742921   61676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:12:56.747499   61676 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  3 18:58 /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:12:56.747562   61676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:12:56.752732   61676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0103 20:12:56.762510   61676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16795.pem && ln -fs /usr/share/ca-certificates/16795.pem /etc/ssl/certs/16795.pem"
	I0103 20:12:56.772401   61676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16795.pem
	I0103 20:12:56.777123   61676 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  3 19:07 /usr/share/ca-certificates/16795.pem
	I0103 20:12:56.777180   61676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16795.pem
	I0103 20:12:56.782490   61676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16795.pem /etc/ssl/certs/51391683.0"
	I0103 20:12:56.793745   61676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167952.pem && ln -fs /usr/share/ca-certificates/167952.pem /etc/ssl/certs/167952.pem"
	I0103 20:12:56.805156   61676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167952.pem
	I0103 20:12:56.809897   61676 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  3 19:07 /usr/share/ca-certificates/167952.pem
	I0103 20:12:56.809954   61676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167952.pem
	I0103 20:12:56.815432   61676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167952.pem /etc/ssl/certs/3ec20f2e.0"
	I0103 20:12:56.826498   61676 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0103 20:12:56.831012   61676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0103 20:12:56.837150   61676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0103 20:12:56.843256   61676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0103 20:12:56.849182   61676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0103 20:12:56.854882   61676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0103 20:12:56.862018   61676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
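Editor's note: each "openssl x509 -noout -checkend 86400" run above asks whether a certificate expires within the next 24 hours. Below is a Go equivalent using crypto/x509; the file name in main is one of the certs listed above and is assumed to be readable locally.

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"errors"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within d,
    // the native equivalent of `openssl x509 -noout -checkend <seconds>`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, errors.New("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	expiring, err := expiresWithin("apiserver-etcd-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("expires within 24h:", expiring)
    }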
	I0103 20:12:56.867863   61676 kubeadm.go:404] StartCluster: {Name:embed-certs-451331 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-451331 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.197 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 20:12:56.867982   61676 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0103 20:12:56.868029   61676 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0103 20:12:56.909417   61676 cri.go:89] found id: ""
	I0103 20:12:56.909523   61676 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0103 20:12:56.919487   61676 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0103 20:12:56.919515   61676 kubeadm.go:636] restartCluster start
	I0103 20:12:56.919568   61676 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0103 20:12:56.929137   61676 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:12:56.930326   61676 kubeconfig.go:92] found "embed-certs-451331" server: "https://192.168.50.197:8443"
	I0103 20:12:56.932682   61676 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0103 20:12:56.941846   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:12:56.941909   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:12:56.953616   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:12:57.442188   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:12:57.442281   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:12:57.458303   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:12:57.942905   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:12:57.942988   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:12:57.955860   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:12:58.442326   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:12:58.442420   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:12:58.454294   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:12:55.357897   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:12:55.358462   62015 main.go:141] libmachine: (no-preload-749210) DBG | unable to find current IP address of domain no-preload-749210 in network mk-no-preload-749210
	I0103 20:12:55.358492   62015 main.go:141] libmachine: (no-preload-749210) DBG | I0103 20:12:55.358429   62702 retry.go:31] will retry after 1.158958207s: waiting for machine to come up
	I0103 20:12:56.518837   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:12:56.519260   62015 main.go:141] libmachine: (no-preload-749210) DBG | unable to find current IP address of domain no-preload-749210 in network mk-no-preload-749210
	I0103 20:12:56.519306   62015 main.go:141] libmachine: (no-preload-749210) DBG | I0103 20:12:56.519224   62702 retry.go:31] will retry after 1.620553071s: waiting for machine to come up
	I0103 20:12:58.141980   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:12:58.142505   62015 main.go:141] libmachine: (no-preload-749210) DBG | unable to find current IP address of domain no-preload-749210 in network mk-no-preload-749210
	I0103 20:12:58.142549   62015 main.go:141] libmachine: (no-preload-749210) DBG | I0103 20:12:58.142454   62702 retry.go:31] will retry after 1.525068593s: waiting for machine to come up
	I0103 20:12:59.670380   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:12:59.670880   62015 main.go:141] libmachine: (no-preload-749210) DBG | unable to find current IP address of domain no-preload-749210 in network mk-no-preload-749210
	I0103 20:12:59.670909   62015 main.go:141] libmachine: (no-preload-749210) DBG | I0103 20:12:59.670827   62702 retry.go:31] will retry after 1.772431181s: waiting for machine to come up
	I0103 20:12:58.942887   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:12:58.942975   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:12:58.956781   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:12:59.442313   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:12:59.442402   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:12:59.455837   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:12:59.942355   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:12:59.942439   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:12:59.954326   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:00.441870   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:13:00.441960   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:00.454004   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:00.941882   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:13:00.941995   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:00.958004   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:01.442573   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:13:01.442664   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:01.458604   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:01.942062   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:13:01.942170   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:01.958396   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:02.442928   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:13:02.443027   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:02.456612   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:02.941943   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:13:02.942056   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:02.953939   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:03.442552   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:13:03.442633   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:03.454840   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
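Editor's note: the repeated "Checking apiserver status ..." lines above poll pgrep roughly every 500ms until the kube-apiserver process appears, and eventually give up with "context deadline exceeded". Below is a Go sketch of that loop under a context deadline; the timeout in main is arbitrary, and sudo is dropped from the pgrep call for simplicity.

    package main

    import (
    	"context"
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForAPIServerPID polls pgrep until it returns a PID or the context expires.
    func waitForAPIServerPID(ctx context.Context) (string, error) {
    	ticker := time.NewTicker(500 * time.Millisecond)
    	defer ticker.Stop()
    	for {
    		out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
    		if err == nil && len(out) > 0 {
    			return string(out), nil
    		}
    		select {
    		case <-ctx.Done():
    			// mirrors the "apiserver error: context deadline exceeded" outcome below
    			return "", fmt.Errorf("apiserver error: %w", ctx.Err())
    		case <-ticker.C:
    		}
    	}
    }

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    	defer cancel()
    	pid, err := waitForAPIServerPID(ctx)
    	fmt.Println(pid, err)
    }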
	I0103 20:13:01.445221   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:01.445608   62015 main.go:141] libmachine: (no-preload-749210) DBG | unable to find current IP address of domain no-preload-749210 in network mk-no-preload-749210
	I0103 20:13:01.445647   62015 main.go:141] libmachine: (no-preload-749210) DBG | I0103 20:13:01.445565   62702 retry.go:31] will retry after 2.830747633s: waiting for machine to come up
	I0103 20:13:04.279514   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:04.279996   62015 main.go:141] libmachine: (no-preload-749210) DBG | unable to find current IP address of domain no-preload-749210 in network mk-no-preload-749210
	I0103 20:13:04.280020   62015 main.go:141] libmachine: (no-preload-749210) DBG | I0103 20:13:04.279963   62702 retry.go:31] will retry after 4.03880385s: waiting for machine to come up
	I0103 20:13:03.942687   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:13:03.942774   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:03.954714   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:04.442265   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:13:04.442357   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:04.454216   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:04.942877   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:13:04.942952   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:04.954944   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:05.442467   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:13:05.442596   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:05.454305   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:05.942383   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:13:05.942468   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:05.954074   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:06.442723   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:13:06.442811   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:06.454629   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:06.942200   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:13:06.942283   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:06.953799   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:06.953829   61676 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0103 20:13:06.953836   61676 kubeadm.go:1135] stopping kube-system containers ...
	I0103 20:13:06.953845   61676 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0103 20:13:06.953904   61676 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0103 20:13:06.989109   61676 cri.go:89] found id: ""
	I0103 20:13:06.989214   61676 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0103 20:13:07.004822   61676 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0103 20:13:07.014393   61676 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0103 20:13:07.014454   61676 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0103 20:13:07.023669   61676 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0103 20:13:07.023691   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:07.139277   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:07.626388   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:07.814648   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:07.901750   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:07.962623   61676 api_server.go:52] waiting for apiserver process to appear ...
	I0103 20:13:07.962710   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:13:08.463820   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:13:08.322801   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:08.323160   62015 main.go:141] libmachine: (no-preload-749210) Found IP for machine: 192.168.61.245
	I0103 20:13:08.323203   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has current primary IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:08.323222   62015 main.go:141] libmachine: (no-preload-749210) Reserving static IP address...
	I0103 20:13:08.323600   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "no-preload-749210", mac: "52:54:00:fb:87:c7", ip: "192.168.61.245"} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:08.323632   62015 main.go:141] libmachine: (no-preload-749210) Reserved static IP address: 192.168.61.245
	I0103 20:13:08.323664   62015 main.go:141] libmachine: (no-preload-749210) DBG | skip adding static IP to network mk-no-preload-749210 - found existing host DHCP lease matching {name: "no-preload-749210", mac: "52:54:00:fb:87:c7", ip: "192.168.61.245"}
	I0103 20:13:08.323684   62015 main.go:141] libmachine: (no-preload-749210) DBG | Getting to WaitForSSH function...
	I0103 20:13:08.323698   62015 main.go:141] libmachine: (no-preload-749210) Waiting for SSH to be available...
	I0103 20:13:08.325529   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:08.325831   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:87:c7", ip: ""} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:08.325863   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:08.325949   62015 main.go:141] libmachine: (no-preload-749210) DBG | Using SSH client type: external
	I0103 20:13:08.325977   62015 main.go:141] libmachine: (no-preload-749210) DBG | Using SSH private key: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/no-preload-749210/id_rsa (-rw-------)
	I0103 20:13:08.326013   62015 main.go:141] libmachine: (no-preload-749210) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.245 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17885-9609/.minikube/machines/no-preload-749210/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0103 20:13:08.326030   62015 main.go:141] libmachine: (no-preload-749210) DBG | About to run SSH command:
	I0103 20:13:08.326053   62015 main.go:141] libmachine: (no-preload-749210) DBG | exit 0
	I0103 20:13:08.418368   62015 main.go:141] libmachine: (no-preload-749210) DBG | SSH cmd err, output: <nil>: 
	I0103 20:13:08.418718   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetConfigRaw
	I0103 20:13:08.419464   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetIP
	I0103 20:13:08.421838   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:08.422172   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:87:c7", ip: ""} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:08.422199   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:08.422460   62015 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/no-preload-749210/config.json ...
	I0103 20:13:08.422680   62015 machine.go:88] provisioning docker machine ...
	I0103 20:13:08.422702   62015 main.go:141] libmachine: (no-preload-749210) Calling .DriverName
	I0103 20:13:08.422883   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetMachineName
	I0103 20:13:08.423027   62015 buildroot.go:166] provisioning hostname "no-preload-749210"
	I0103 20:13:08.423047   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetMachineName
	I0103 20:13:08.423153   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHHostname
	I0103 20:13:08.425105   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:08.425377   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:87:c7", ip: ""} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:08.425408   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:08.425583   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHPort
	I0103 20:13:08.425734   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHKeyPath
	I0103 20:13:08.425869   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHKeyPath
	I0103 20:13:08.425987   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHUsername
	I0103 20:13:08.426160   62015 main.go:141] libmachine: Using SSH client type: native
	I0103 20:13:08.426488   62015 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.61.245 22 <nil> <nil>}
	I0103 20:13:08.426501   62015 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-749210 && echo "no-preload-749210" | sudo tee /etc/hostname
	I0103 20:13:08.579862   62015 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-749210
	
	I0103 20:13:08.579892   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHHostname
	I0103 20:13:08.583166   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:08.583600   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:87:c7", ip: ""} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:08.583635   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:08.583828   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHPort
	I0103 20:13:08.584039   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHKeyPath
	I0103 20:13:08.584225   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHKeyPath
	I0103 20:13:08.584391   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHUsername
	I0103 20:13:08.584593   62015 main.go:141] libmachine: Using SSH client type: native
	I0103 20:13:08.584928   62015 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.61.245 22 <nil> <nil>}
	I0103 20:13:08.584954   62015 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-749210' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-749210/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-749210' | sudo tee -a /etc/hosts; 
				fi
			fi
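Editor's note: the "Using SSH client type: external" lines above show the ssh(1) invocation libmachine builds for the no-preload VM. The Go sketch below shells out with a subset of those options; host, user, and key path are taken from the log, and the remote command is a trivial placeholder.

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // runExternalSSH invokes the system ssh binary with options similar to the
    // debug line above, then runs one remote command.
    func runExternalSSH(user, ip, keyPath, remoteCmd string) error {
    	args := []string{
    		"-F", "/dev/null",
    		"-o", "ConnectionAttempts=3",
    		"-o", "ConnectTimeout=10",
    		"-o", "StrictHostKeyChecking=no",
    		"-o", "UserKnownHostsFile=/dev/null",
    		"-o", "IdentitiesOnly=yes",
    		"-i", keyPath,
    		"-p", "22",
    		fmt.Sprintf("%s@%s", user, ip),
    		remoteCmd,
    	}
    	cmd := exec.Command("ssh", args...)
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	return cmd.Run()
    }

    func main() {
    	_ = runExternalSSH("docker", "192.168.61.245",
    		"/home/jenkins/minikube-integration/17885-9609/.minikube/machines/no-preload-749210/id_rsa",
    		"exit 0")
    }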
	I0103 20:13:08.729661   62015 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0103 20:13:08.729697   62015 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17885-9609/.minikube CaCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17885-9609/.minikube}
	I0103 20:13:08.729738   62015 buildroot.go:174] setting up certificates
	I0103 20:13:08.729759   62015 provision.go:83] configureAuth start
	I0103 20:13:08.729776   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetMachineName
	I0103 20:13:08.730101   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetIP
	I0103 20:13:08.733282   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:08.733694   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:87:c7", ip: ""} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:08.733728   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:08.733868   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHHostname
	I0103 20:13:08.736223   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:08.736557   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:87:c7", ip: ""} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:08.736589   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:08.736763   62015 provision.go:138] copyHostCerts
	I0103 20:13:08.736830   62015 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem, removing ...
	I0103 20:13:08.736847   62015 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem
	I0103 20:13:08.736913   62015 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem (1078 bytes)
	I0103 20:13:08.737035   62015 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem, removing ...
	I0103 20:13:08.737047   62015 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem
	I0103 20:13:08.737077   62015 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem (1123 bytes)
	I0103 20:13:08.737177   62015 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem, removing ...
	I0103 20:13:08.737188   62015 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem
	I0103 20:13:08.737218   62015 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem (1679 bytes)
	I0103 20:13:08.737295   62015 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem org=jenkins.no-preload-749210 san=[192.168.61.245 192.168.61.245 localhost 127.0.0.1 minikube no-preload-749210]
	I0103 20:13:09.018604   62015 provision.go:172] copyRemoteCerts
	I0103 20:13:09.018662   62015 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0103 20:13:09.018684   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHHostname
	I0103 20:13:09.021339   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:09.021729   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:87:c7", ip: ""} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:09.021777   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:09.021852   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHPort
	I0103 20:13:09.022068   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHKeyPath
	I0103 20:13:09.022220   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHUsername
	I0103 20:13:09.022405   62015 sshutil.go:53] new ssh client: &{IP:192.168.61.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/no-preload-749210/id_rsa Username:docker}
	I0103 20:13:09.120023   62015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0103 20:13:09.143242   62015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0103 20:13:09.166206   62015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0103 20:13:09.192425   62015 provision.go:86] duration metric: configureAuth took 462.649611ms
	I0103 20:13:09.192457   62015 buildroot.go:189] setting minikube options for container-runtime
	I0103 20:13:09.192678   62015 config.go:182] Loaded profile config "no-preload-749210": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0103 20:13:09.192770   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHHostname
	I0103 20:13:09.195193   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:09.195594   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:87:c7", ip: ""} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:09.195633   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:09.195852   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHPort
	I0103 20:13:09.196100   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHKeyPath
	I0103 20:13:09.196272   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHKeyPath
	I0103 20:13:09.196437   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHUsername
	I0103 20:13:09.196637   62015 main.go:141] libmachine: Using SSH client type: native
	I0103 20:13:09.197028   62015 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.61.245 22 <nil> <nil>}
	I0103 20:13:09.197048   62015 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0103 20:13:09.528890   62015 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0103 20:13:09.528915   62015 machine.go:91] provisioned docker machine in 1.106221183s
	I0103 20:13:09.528924   62015 start.go:300] post-start starting for "no-preload-749210" (driver="kvm2")
	I0103 20:13:09.528949   62015 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0103 20:13:09.528966   62015 main.go:141] libmachine: (no-preload-749210) Calling .DriverName
	I0103 20:13:09.529337   62015 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0103 20:13:09.529372   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHHostname
	I0103 20:13:09.532679   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:09.533032   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:87:c7", ip: ""} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:09.533063   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:09.533262   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHPort
	I0103 20:13:09.533490   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHKeyPath
	I0103 20:13:09.533675   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHUsername
	I0103 20:13:09.533841   62015 sshutil.go:53] new ssh client: &{IP:192.168.61.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/no-preload-749210/id_rsa Username:docker}
	I0103 20:13:09.632949   62015 ssh_runner.go:195] Run: cat /etc/os-release
	I0103 20:13:09.638382   62015 info.go:137] Remote host: Buildroot 2021.02.12
	I0103 20:13:09.638421   62015 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-9609/.minikube/addons for local assets ...
	I0103 20:13:09.638502   62015 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-9609/.minikube/files for local assets ...
	I0103 20:13:09.638617   62015 filesync.go:149] local asset: /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem -> 167952.pem in /etc/ssl/certs
	I0103 20:13:09.638744   62015 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0103 20:13:09.650407   62015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem --> /etc/ssl/certs/167952.pem (1708 bytes)
	I0103 20:13:09.672528   62015 start.go:303] post-start completed in 143.577643ms
	I0103 20:13:09.672560   62015 fix.go:56] fixHost completed within 19.513324819s
	I0103 20:13:09.672585   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHHostname
	I0103 20:13:09.675037   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:09.675398   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:87:c7", ip: ""} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:09.675430   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:09.675587   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHPort
	I0103 20:13:09.675811   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHKeyPath
	I0103 20:13:09.675963   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHKeyPath
	I0103 20:13:09.676112   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHUsername
	I0103 20:13:09.676294   62015 main.go:141] libmachine: Using SSH client type: native
	I0103 20:13:09.676674   62015 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.61.245 22 <nil> <nil>}
	I0103 20:13:09.676690   62015 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0103 20:13:09.811720   62050 start.go:369] acquired machines lock for "default-k8s-diff-port-018788" in 4m4.205717121s
	I0103 20:13:09.811786   62050 start.go:96] Skipping create...Using existing machine configuration
	I0103 20:13:09.811797   62050 fix.go:54] fixHost starting: 
	I0103 20:13:09.812213   62050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:13:09.812257   62050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:13:09.831972   62050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36915
	I0103 20:13:09.832420   62050 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:13:09.832973   62050 main.go:141] libmachine: Using API Version  1
	I0103 20:13:09.833004   62050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:13:09.833345   62050 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:13:09.833505   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .DriverName
	I0103 20:13:09.833637   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetState
	I0103 20:13:09.835476   62050 fix.go:102] recreateIfNeeded on default-k8s-diff-port-018788: state=Stopped err=<nil>
	I0103 20:13:09.835520   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .DriverName
	W0103 20:13:09.835689   62050 fix.go:128] unexpected machine state, will restart: <nil>
	I0103 20:13:09.837499   62050 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-018788" ...
	I0103 20:13:09.838938   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .Start
	I0103 20:13:09.839117   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Ensuring networks are active...
	I0103 20:13:09.839888   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Ensuring network default is active
	I0103 20:13:09.840347   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Ensuring network mk-default-k8s-diff-port-018788 is active
	I0103 20:13:09.840765   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Getting domain xml...
	I0103 20:13:09.841599   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Creating domain...
	I0103 20:13:09.811571   62015 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704312789.764323206
	
	I0103 20:13:09.811601   62015 fix.go:206] guest clock: 1704312789.764323206
	I0103 20:13:09.811611   62015 fix.go:219] Guest: 2024-01-03 20:13:09.764323206 +0000 UTC Remote: 2024-01-03 20:13:09.672564299 +0000 UTC m=+244.986151230 (delta=91.758907ms)
	I0103 20:13:09.811636   62015 fix.go:190] guest clock delta is within tolerance: 91.758907ms
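
The fix.go lines above read the guest clock over SSH (the `date +%s.%N` command, which the log prints with Go's %!s(MISSING) artifact), compare it with the host clock, and accept the drift when it stays under a tolerance, here 91.758907ms. A minimal sketch of that comparison follows; the tolerance value and function names are illustrative, not minikube's exact code.

	// Illustrative sketch: parse the guest's `date +%s.%N` output and compare it
	// to the host clock.
	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	func guestClock(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			nsec, _ = strconv.ParseInt(parts[1], 10, 64)
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		const tolerance = time.Second // illustrative threshold
		guest, err := guestClock("1704312789.764323206") // value from the log above
		if err != nil {
			panic(err)
		}
		delta := time.Since(guest)
		if delta < 0 {
			delta = -delta
		}
		if delta <= tolerance {
			fmt.Printf("guest clock delta %v is within tolerance\n", delta)
		} else {
			fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
		}
	}
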
	I0103 20:13:09.811642   62015 start.go:83] releasing machines lock for "no-preload-749210", held for 19.652439302s
	I0103 20:13:09.811678   62015 main.go:141] libmachine: (no-preload-749210) Calling .DriverName
	I0103 20:13:09.811949   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetIP
	I0103 20:13:09.815012   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:09.815391   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:87:c7", ip: ""} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:09.815429   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:09.815641   62015 main.go:141] libmachine: (no-preload-749210) Calling .DriverName
	I0103 20:13:09.816177   62015 main.go:141] libmachine: (no-preload-749210) Calling .DriverName
	I0103 20:13:09.816363   62015 main.go:141] libmachine: (no-preload-749210) Calling .DriverName
	I0103 20:13:09.816471   62015 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0103 20:13:09.816509   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHHostname
	I0103 20:13:09.816620   62015 ssh_runner.go:195] Run: cat /version.json
	I0103 20:13:09.816646   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHHostname
	I0103 20:13:09.819652   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:09.819909   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:09.820058   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:87:c7", ip: ""} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:09.820088   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:09.820319   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:87:c7", ip: ""} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:09.820345   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:09.820377   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHPort
	I0103 20:13:09.820581   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHKeyPath
	I0103 20:13:09.820646   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHPort
	I0103 20:13:09.820753   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHUsername
	I0103 20:13:09.820822   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHKeyPath
	I0103 20:13:09.820910   62015 sshutil.go:53] new ssh client: &{IP:192.168.61.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/no-preload-749210/id_rsa Username:docker}
	I0103 20:13:09.821007   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHUsername
	I0103 20:13:09.821131   62015 sshutil.go:53] new ssh client: &{IP:192.168.61.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/no-preload-749210/id_rsa Username:docker}
	I0103 20:13:09.949119   62015 ssh_runner.go:195] Run: systemctl --version
	I0103 20:13:09.956247   62015 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0103 20:13:10.116715   62015 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0103 20:13:10.122512   62015 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0103 20:13:10.122640   62015 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0103 20:13:10.142239   62015 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0103 20:13:10.142265   62015 start.go:475] detecting cgroup driver to use...
	I0103 20:13:10.142336   62015 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0103 20:13:10.159473   62015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0103 20:13:10.175492   62015 docker.go:203] disabling cri-docker service (if available) ...
	I0103 20:13:10.175555   62015 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0103 20:13:10.191974   62015 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0103 20:13:10.208639   62015 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0103 20:13:10.343228   62015 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0103 20:13:10.457642   62015 docker.go:219] disabling docker service ...
	I0103 20:13:10.457720   62015 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0103 20:13:10.475117   62015 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0103 20:13:10.491265   62015 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0103 20:13:10.613064   62015 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0103 20:13:10.741969   62015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0103 20:13:10.755923   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0103 20:13:10.775483   62015 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0103 20:13:10.775550   62015 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:13:10.785489   62015 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0103 20:13:10.785557   62015 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:13:10.795303   62015 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:13:10.804763   62015 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
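
The four sed commands above edit /etc/crio/crio.conf.d/02-crio.conf in place: pin pause_image to registry.k8s.io/pause:3.9, set cgroup_manager to cgroupfs, drop any existing conmon_cgroup line, and append conmon_cgroup = "pod" after the cgroup_manager line. A minimal Go sketch performing the equivalent rewrite is below, assuming the file already contains pause_image and cgroup_manager keys; it mirrors the sed calls rather than reproducing minikube's code.

	// Illustrative sketch of the same edits the sed commands above perform.
	package main

	import (
		"os"
		"regexp"
	)

	func rewriteCrioConf(path string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		s := string(data)
		s = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(s, `pause_image = "registry.k8s.io/pause:3.9"`)
		s = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(s, `cgroup_manager = "cgroupfs"`)
		// Drop any existing conmon_cgroup line, then append the desired one after
		// cgroup_manager (mirrors the delete + append pair of sed calls).
		s = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(s, "")
		s = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
			ReplaceAllString(s, "$1\nconmon_cgroup = \"pod\"")
		return os.WriteFile(path, []byte(s), 0o644)
	}

	func main() {
		if err := rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf"); err != nil {
			panic(err)
		}
	}
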
	I0103 20:13:10.814559   62015 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0103 20:13:10.824431   62015 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0103 20:13:10.833193   62015 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0103 20:13:10.833273   62015 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0103 20:13:10.850446   62015 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0103 20:13:10.861775   62015 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0103 20:13:11.021577   62015 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0103 20:13:11.217675   62015 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0103 20:13:11.217748   62015 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0103 20:13:11.222475   62015 start.go:543] Will wait 60s for crictl version
	I0103 20:13:11.222552   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:13:11.226128   62015 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0103 20:13:11.266681   62015 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0103 20:13:11.266775   62015 ssh_runner.go:195] Run: crio --version
	I0103 20:13:11.313142   62015 ssh_runner.go:195] Run: crio --version
	I0103 20:13:11.358396   62015 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I0103 20:13:08.963472   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:13:09.462836   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:13:09.963771   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:13:09.991718   61676 api_server.go:72] duration metric: took 2.029094062s to wait for apiserver process to appear ...
	I0103 20:13:09.991748   61676 api_server.go:88] waiting for apiserver healthz status ...
	I0103 20:13:09.991769   61676 api_server.go:253] Checking apiserver healthz at https://192.168.50.197:8443/healthz ...
	I0103 20:13:09.992264   61676 api_server.go:269] stopped: https://192.168.50.197:8443/healthz: Get "https://192.168.50.197:8443/healthz": dial tcp 192.168.50.197:8443: connect: connection refused
	I0103 20:13:10.491803   61676 api_server.go:253] Checking apiserver healthz at https://192.168.50.197:8443/healthz ...
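
The api_server.go lines above (and the 403/500 bodies further down) poll the apiserver's /healthz endpoint until it answers 200, treating connection refused, anonymous 403s, and 500s during startup as "not ready yet". A minimal sketch of such a poll is below; the anonymous TLS probe explains the system:anonymous 403s, and the timeout and helper name are illustrative.

	// Illustrative sketch: poll https://<node>:8443/healthz until it returns 200.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Anonymous probe against the apiserver's self-signed cert, hence the
			// early 403 responses for system:anonymous.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, string(body))
			} else {
				fmt.Printf("healthz not reachable yet: %v\n", err)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver healthz did not become ready within %v", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.50.197:8443/healthz", 6*time.Minute); err != nil {
			panic(err)
		}
	}
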
	I0103 20:13:11.359808   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetIP
	I0103 20:13:11.363074   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:11.363434   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:87:c7", ip: ""} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:11.363465   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:11.363695   62015 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0103 20:13:11.367689   62015 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0103 20:13:11.378693   62015 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0103 20:13:11.378746   62015 ssh_runner.go:195] Run: sudo crictl images --output json
	I0103 20:13:11.416544   62015 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0103 20:13:11.416570   62015 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0103 20:13:11.416642   62015 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 20:13:11.416698   62015 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0103 20:13:11.416724   62015 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0103 20:13:11.416699   62015 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0103 20:13:11.416929   62015 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0103 20:13:11.416671   62015 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0103 20:13:11.417054   62015 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0103 20:13:11.417093   62015 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0103 20:13:11.418600   62015 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0103 20:13:11.418621   62015 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0103 20:13:11.418630   62015 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0103 20:13:11.418646   62015 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0103 20:13:11.418661   62015 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 20:13:11.418675   62015 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0103 20:13:11.418685   62015 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0103 20:13:11.418697   62015 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0103 20:13:11.635223   62015 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0103 20:13:11.662007   62015 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0103 20:13:11.668522   62015 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0103 20:13:11.671471   62015 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0103 20:13:11.672069   62015 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0103 20:13:11.685216   62015 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0103 20:13:11.687462   62015 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0103 20:13:11.716775   62015 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0103 20:13:11.716825   62015 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0103 20:13:11.716882   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:13:11.762358   62015 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0103 20:13:11.762394   62015 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0103 20:13:11.762463   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:13:11.846225   62015 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0103 20:13:11.846268   62015 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0103 20:13:11.846317   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:13:11.846432   62015 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0103 20:13:11.846473   62015 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0103 20:13:11.846529   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:13:11.846515   62015 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0103 20:13:11.846655   62015 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0103 20:13:11.846711   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:13:11.956577   62015 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0103 20:13:11.956659   62015 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0103 20:13:11.956689   62015 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0103 20:13:11.956746   62015 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0103 20:13:11.956760   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:13:11.956782   62015 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0103 20:13:11.956820   62015 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0103 20:13:11.956873   62015 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0103 20:13:12.064715   62015 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0103 20:13:12.064764   62015 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0103 20:13:12.064720   62015 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0103 20:13:12.064856   62015 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0103 20:13:12.064903   62015 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0103 20:13:12.068647   62015 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0103 20:13:12.068685   62015 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0103 20:13:12.068752   62015 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0103 20:13:12.068767   62015 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0103 20:13:12.068771   62015 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0103 20:13:12.068841   62015 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0103 20:13:12.077600   62015 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0103 20:13:12.077622   62015 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0103 20:13:12.077682   62015 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0103 20:13:12.077798   62015 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0103 20:13:12.109729   62015 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0103 20:13:12.109778   62015 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0103 20:13:12.109838   62015 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0103 20:13:12.109927   62015 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0103 20:13:12.110020   62015 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I0103 20:13:12.237011   62015 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 20:13:14.279507   62015 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.201800359s)
	I0103 20:13:14.279592   62015 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0103 20:13:14.279606   62015 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0: (2.169553787s)
	I0103 20:13:14.279641   62015 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0103 20:13:14.279646   62015 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0103 20:13:14.279645   62015 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.042604307s)
	I0103 20:13:14.279725   62015 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0103 20:13:14.279726   62015 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0103 20:13:14.279760   62015 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 20:13:14.279802   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:13:14.285860   62015 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
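
The cache_images flow above checks whether each cached image tarball already exists on the VM (the stat calls on /var/lib/minikube/images), skips the copy when it does ("copy: skipping ... (exists)"), and then streams it into the runtime with `sudo podman load -i`. The sketch below models that skip-or-copy decision; minikube runs these commands over SSH, whereas the sketch runs them locally, and the helper names are illustrative.

	// Illustrative sketch of the cache flow: skip the copy when the tarball is
	// already present with the expected size, then load it with `podman load`.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strconv"
		"strings"
	)

	func remoteSize(path string) (int64, bool) {
		out, err := exec.Command("sudo", "stat", "-c", "%s", path).Output()
		if err != nil {
			return 0, false // not there yet, caller should copy it
		}
		n, err := strconv.ParseInt(strings.TrimSpace(string(out)), 10, 64)
		return n, err == nil
	}

	func loadCachedImage(cached, target string) error {
		local, err := os.Stat(cached)
		if err != nil {
			return err
		}
		if size, ok := remoteSize(target); !ok || size != local.Size() {
			// In minikube this would be an scp of the cached tarball; elided here.
			return fmt.Errorf("copy of %s to %s still needed", cached, target)
		}
		fmt.Printf("copy: skipping %s (exists)\n", target)
		return exec.Command("sudo", "podman", "load", "-i", target).Run()
	}

	func main() {
		err := loadCachedImage(
			os.Getenv("HOME")+"/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2",
			"/var/lib/minikube/images/kube-proxy_v1.29.0-rc.2",
		)
		if err != nil {
			fmt.Println(err)
		}
	}
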
	I0103 20:13:11.246503   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting to get IP...
	I0103 20:13:11.247669   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:11.248203   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | unable to find current IP address of domain default-k8s-diff-port-018788 in network mk-default-k8s-diff-port-018788
	I0103 20:13:11.248301   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | I0103 20:13:11.248165   62835 retry.go:31] will retry after 292.358185ms: waiting for machine to come up
	I0103 20:13:11.541836   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:11.542224   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | unable to find current IP address of domain default-k8s-diff-port-018788 in network mk-default-k8s-diff-port-018788
	I0103 20:13:11.542257   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | I0103 20:13:11.542168   62835 retry.go:31] will retry after 370.634511ms: waiting for machine to come up
	I0103 20:13:11.914890   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:11.915372   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | unable to find current IP address of domain default-k8s-diff-port-018788 in network mk-default-k8s-diff-port-018788
	I0103 20:13:11.915403   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | I0103 20:13:11.915330   62835 retry.go:31] will retry after 304.80922ms: waiting for machine to come up
	I0103 20:13:12.221826   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:12.222257   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | unable to find current IP address of domain default-k8s-diff-port-018788 in network mk-default-k8s-diff-port-018788
	I0103 20:13:12.222289   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | I0103 20:13:12.222232   62835 retry.go:31] will retry after 534.177843ms: waiting for machine to come up
	I0103 20:13:12.757904   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:12.758389   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | unable to find current IP address of domain default-k8s-diff-port-018788 in network mk-default-k8s-diff-port-018788
	I0103 20:13:12.758422   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | I0103 20:13:12.758334   62835 retry.go:31] will retry after 749.166369ms: waiting for machine to come up
	I0103 20:13:13.509343   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:13.509938   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | unable to find current IP address of domain default-k8s-diff-port-018788 in network mk-default-k8s-diff-port-018788
	I0103 20:13:13.509984   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | I0103 20:13:13.509854   62835 retry.go:31] will retry after 716.215015ms: waiting for machine to come up
	I0103 20:13:14.227886   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:14.228388   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | unable to find current IP address of domain default-k8s-diff-port-018788 in network mk-default-k8s-diff-port-018788
	I0103 20:13:14.228414   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | I0103 20:13:14.228338   62835 retry.go:31] will retry after 1.095458606s: waiting for machine to come up
	I0103 20:13:15.324880   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:15.325299   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | unable to find current IP address of domain default-k8s-diff-port-018788 in network mk-default-k8s-diff-port-018788
	I0103 20:13:15.325332   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | I0103 20:13:15.325250   62835 retry.go:31] will retry after 1.266878415s: waiting for machine to come up
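
The retry.go lines above poll libvirt for the restarted VM's DHCP lease and sleep a randomized, growing interval between attempts ("will retry after 292.358185ms", "will retry after 1.266878415s", ...). A minimal sketch of that wait-with-backoff pattern follows; lookupIP stands in for the libvirt lease query and is not minikube's actual code.

	// Illustrative sketch: poll for the machine's IP with randomized, growing backoff.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	func lookupIP() (string, error) {
		// Placeholder: a real implementation would read the domain's DHCP lease.
		return "", errors.New("unable to find current IP address")
	}

	func waitForIP(timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		backoff := 200 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookupIP(); err == nil {
				return ip, nil
			}
			// Randomize around the current backoff, then grow it, capped at a few seconds.
			sleep := backoff/2 + time.Duration(rand.Int63n(int64(backoff)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
			time.Sleep(sleep)
			if backoff < 4*time.Second {
				backoff *= 2
			}
		}
		return "", fmt.Errorf("machine did not report an IP within %v", timeout)
	}

	func main() {
		if ip, err := waitForIP(3 * time.Second); err != nil {
			fmt.Println(err)
		} else {
			fmt.Println("got IP:", ip)
		}
	}
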
	I0103 20:13:14.427035   61676 api_server.go:279] https://192.168.50.197:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0103 20:13:14.427077   61676 api_server.go:103] status: https://192.168.50.197:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0103 20:13:14.427119   61676 api_server.go:253] Checking apiserver healthz at https://192.168.50.197:8443/healthz ...
	I0103 20:13:14.462068   61676 api_server.go:279] https://192.168.50.197:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0103 20:13:14.462115   61676 api_server.go:103] status: https://192.168.50.197:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0103 20:13:14.492283   61676 api_server.go:253] Checking apiserver healthz at https://192.168.50.197:8443/healthz ...
	I0103 20:13:14.500354   61676 api_server.go:279] https://192.168.50.197:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0103 20:13:14.500391   61676 api_server.go:103] status: https://192.168.50.197:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0103 20:13:14.991910   61676 api_server.go:253] Checking apiserver healthz at https://192.168.50.197:8443/healthz ...
	I0103 20:13:14.997522   61676 api_server.go:279] https://192.168.50.197:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0103 20:13:14.997550   61676 api_server.go:103] status: https://192.168.50.197:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0103 20:13:15.492157   61676 api_server.go:253] Checking apiserver healthz at https://192.168.50.197:8443/healthz ...
	I0103 20:13:15.500340   61676 api_server.go:279] https://192.168.50.197:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0103 20:13:15.500377   61676 api_server.go:103] status: https://192.168.50.197:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0103 20:13:15.992158   61676 api_server.go:253] Checking apiserver healthz at https://192.168.50.197:8443/healthz ...
	I0103 20:13:16.002940   61676 api_server.go:279] https://192.168.50.197:8443/healthz returned 200:
	ok
	I0103 20:13:16.020171   61676 api_server.go:141] control plane version: v1.28.4
	I0103 20:13:16.020205   61676 api_server.go:131] duration metric: took 6.028448633s to wait for apiserver health ...
	I0103 20:13:16.020216   61676 cni.go:84] Creating CNI manager for ""
	I0103 20:13:16.020226   61676 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0103 20:13:16.022596   61676 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0103 20:13:16.024514   61676 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0103 20:13:16.064582   61676 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0103 20:13:16.113727   61676 system_pods.go:43] waiting for kube-system pods to appear ...
	I0103 20:13:16.124984   61676 system_pods.go:59] 8 kube-system pods found
	I0103 20:13:16.125031   61676 system_pods.go:61] "coredns-5dd5756b68-sx6gg" [6a4ea161-1a32-4c3b-9a0d-b4c596492d8b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0103 20:13:16.125044   61676 system_pods.go:61] "etcd-embed-certs-451331" [01d6441d-5e39-405a-81df-c2ed1e28cf0b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0103 20:13:16.125061   61676 system_pods.go:61] "kube-apiserver-embed-certs-451331" [ed38f120-6a1a-48e7-9346-f792f2e13cfc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0103 20:13:16.125072   61676 system_pods.go:61] "kube-controller-manager-embed-certs-451331" [4ca17ea6-a7e6-425b-98ba-7f917ceb91a0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0103 20:13:16.125086   61676 system_pods.go:61] "kube-proxy-fsnb9" [d1f00cf1-e9c4-442b-a6b3-b633252b840c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0103 20:13:16.125097   61676 system_pods.go:61] "kube-scheduler-embed-certs-451331" [00ec8091-7ed7-40b0-8b63-1c548fa8632d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0103 20:13:16.125111   61676 system_pods.go:61] "metrics-server-57f55c9bc5-sm8rb" [12b9f83d-abf8-431c-a271-b8489d32f0de] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0103 20:13:16.125125   61676 system_pods.go:61] "storage-provisioner" [cbce49e7-cef5-40a1-a017-906fcc77ef66] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0103 20:13:16.125140   61676 system_pods.go:74] duration metric: took 11.390906ms to wait for pod list to return data ...
	I0103 20:13:16.125152   61676 node_conditions.go:102] verifying NodePressure condition ...
	I0103 20:13:16.133036   61676 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0103 20:13:16.133072   61676 node_conditions.go:123] node cpu capacity is 2
	I0103 20:13:16.133086   61676 node_conditions.go:105] duration metric: took 7.928329ms to run NodePressure ...
	I0103 20:13:16.133109   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:16.519151   61676 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0103 20:13:16.530359   61676 kubeadm.go:787] kubelet initialised
	I0103 20:13:16.530380   61676 kubeadm.go:788] duration metric: took 11.203465ms waiting for restarted kubelet to initialise ...
	I0103 20:13:16.530388   61676 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:13:16.540797   61676 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-sx6gg" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:16.550417   61676 pod_ready.go:97] node "embed-certs-451331" hosting pod "coredns-5dd5756b68-sx6gg" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-451331" has status "Ready":"False"
	I0103 20:13:16.550457   61676 pod_ready.go:81] duration metric: took 9.627239ms waiting for pod "coredns-5dd5756b68-sx6gg" in "kube-system" namespace to be "Ready" ...
	E0103 20:13:16.550475   61676 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-451331" hosting pod "coredns-5dd5756b68-sx6gg" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-451331" has status "Ready":"False"
	I0103 20:13:16.550486   61676 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-451331" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:16.557664   61676 pod_ready.go:97] node "embed-certs-451331" hosting pod "etcd-embed-certs-451331" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-451331" has status "Ready":"False"
	I0103 20:13:16.557693   61676 pod_ready.go:81] duration metric: took 7.191907ms waiting for pod "etcd-embed-certs-451331" in "kube-system" namespace to be "Ready" ...
	E0103 20:13:16.557705   61676 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-451331" hosting pod "etcd-embed-certs-451331" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-451331" has status "Ready":"False"
	I0103 20:13:16.557721   61676 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-451331" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:16.566973   61676 pod_ready.go:97] node "embed-certs-451331" hosting pod "kube-apiserver-embed-certs-451331" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-451331" has status "Ready":"False"
	I0103 20:13:16.567007   61676 pod_ready.go:81] duration metric: took 9.268451ms waiting for pod "kube-apiserver-embed-certs-451331" in "kube-system" namespace to be "Ready" ...
	E0103 20:13:16.567019   61676 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-451331" hosting pod "kube-apiserver-embed-certs-451331" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-451331" has status "Ready":"False"
	I0103 20:13:16.567027   61676 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-451331" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:16.587777   61676 pod_ready.go:97] node "embed-certs-451331" hosting pod "kube-controller-manager-embed-certs-451331" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-451331" has status "Ready":"False"
	I0103 20:13:16.587811   61676 pod_ready.go:81] duration metric: took 20.769874ms waiting for pod "kube-controller-manager-embed-certs-451331" in "kube-system" namespace to be "Ready" ...
	E0103 20:13:16.587825   61676 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-451331" hosting pod "kube-controller-manager-embed-certs-451331" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-451331" has status "Ready":"False"
	I0103 20:13:16.587832   61676 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-fsnb9" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:16.923613   61676 pod_ready.go:97] node "embed-certs-451331" hosting pod "kube-proxy-fsnb9" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-451331" has status "Ready":"False"
	I0103 20:13:16.923643   61676 pod_ready.go:81] duration metric: took 335.80096ms waiting for pod "kube-proxy-fsnb9" in "kube-system" namespace to be "Ready" ...
	E0103 20:13:16.923655   61676 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-451331" hosting pod "kube-proxy-fsnb9" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-451331" has status "Ready":"False"
	I0103 20:13:16.923663   61676 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-451331" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:17.323875   61676 pod_ready.go:97] node "embed-certs-451331" hosting pod "kube-scheduler-embed-certs-451331" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-451331" has status "Ready":"False"
	I0103 20:13:17.323911   61676 pod_ready.go:81] duration metric: took 400.238515ms waiting for pod "kube-scheduler-embed-certs-451331" in "kube-system" namespace to be "Ready" ...
	E0103 20:13:17.323922   61676 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-451331" hosting pod "kube-scheduler-embed-certs-451331" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-451331" has status "Ready":"False"
	I0103 20:13:17.323931   61676 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:17.724694   61676 pod_ready.go:97] node "embed-certs-451331" hosting pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-451331" has status "Ready":"False"
	I0103 20:13:17.724727   61676 pod_ready.go:81] duration metric: took 400.785148ms waiting for pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace to be "Ready" ...
	E0103 20:13:17.724741   61676 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-451331" hosting pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-451331" has status "Ready":"False"
	I0103 20:13:17.724750   61676 pod_ready.go:38] duration metric: took 1.194352759s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:13:17.724774   61676 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0103 20:13:17.754724   61676 ops.go:34] apiserver oom_adj: -16
	I0103 20:13:17.754762   61676 kubeadm.go:640] restartCluster took 20.835238159s
	I0103 20:13:17.754774   61676 kubeadm.go:406] StartCluster complete in 20.886921594s
	I0103 20:13:17.754794   61676 settings.go:142] acquiring lock: {Name:mkd213c48538fa01cb82b417485055a8adbf5e4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:13:17.754875   61676 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17885-9609/kubeconfig
	I0103 20:13:17.757638   61676 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-9609/kubeconfig: {Name:mkbd4e6a8b39f5a4a43fb71671a7bbd8b1617cf1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:13:17.759852   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0103 20:13:17.759948   61676 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0103 20:13:17.760022   61676 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-451331"
	I0103 20:13:17.760049   61676 addons.go:237] Setting addon storage-provisioner=true in "embed-certs-451331"
	W0103 20:13:17.760060   61676 addons.go:246] addon storage-provisioner should already be in state true
	I0103 20:13:17.760105   61676 host.go:66] Checking if "embed-certs-451331" exists ...
	I0103 20:13:17.760154   61676 config.go:182] Loaded profile config "embed-certs-451331": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 20:13:17.760202   61676 addons.go:69] Setting default-storageclass=true in profile "embed-certs-451331"
	I0103 20:13:17.760227   61676 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-451331"
	I0103 20:13:17.760525   61676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:13:17.760553   61676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:13:17.760595   61676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:13:17.760619   61676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:13:17.760814   61676 addons.go:69] Setting metrics-server=true in profile "embed-certs-451331"
	I0103 20:13:17.760869   61676 addons.go:237] Setting addon metrics-server=true in "embed-certs-451331"
	W0103 20:13:17.760887   61676 addons.go:246] addon metrics-server should already be in state true
	I0103 20:13:17.760949   61676 host.go:66] Checking if "embed-certs-451331" exists ...
	I0103 20:13:17.761311   61676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:13:17.761367   61676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:13:17.778350   61676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36365
	I0103 20:13:17.778603   61676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40503
	I0103 20:13:17.778840   61676 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:13:17.778947   61676 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:13:17.779349   61676 main.go:141] libmachine: Using API Version  1
	I0103 20:13:17.779369   61676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:13:17.779496   61676 main.go:141] libmachine: Using API Version  1
	I0103 20:13:17.779506   61676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:13:17.779894   61676 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:13:17.779936   61676 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:13:17.780390   61676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46541
	I0103 20:13:17.780507   61676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:13:17.780528   61676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:13:17.780892   61676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:13:17.780933   61676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:13:17.781532   61676 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:13:17.782012   61676 main.go:141] libmachine: Using API Version  1
	I0103 20:13:17.782030   61676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:13:17.782393   61676 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:13:17.782580   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetState
	I0103 20:13:17.786209   61676 addons.go:237] Setting addon default-storageclass=true in "embed-certs-451331"
	W0103 20:13:17.786231   61676 addons.go:246] addon default-storageclass should already be in state true
	I0103 20:13:17.786264   61676 host.go:66] Checking if "embed-certs-451331" exists ...
	I0103 20:13:17.786730   61676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:13:17.786761   61676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:13:17.796538   61676 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-451331" context rescaled to 1 replicas
	I0103 20:13:17.796579   61676 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.197 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0103 20:13:17.798616   61676 out.go:177] * Verifying Kubernetes components...
	I0103 20:13:17.800702   61676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 20:13:17.799744   61676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37933
	I0103 20:13:17.801004   61676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37585
	I0103 20:13:17.801125   61676 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:13:17.801622   61676 main.go:141] libmachine: Using API Version  1
	I0103 20:13:17.801643   61676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:13:17.801967   61676 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:13:17.802456   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetState
	I0103 20:13:17.804195   61676 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:13:17.804537   61676 main.go:141] libmachine: (embed-certs-451331) Calling .DriverName
	I0103 20:13:17.804683   61676 main.go:141] libmachine: Using API Version  1
	I0103 20:13:17.804700   61676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:13:17.806577   61676 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 20:13:17.805108   61676 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:13:17.807681   61676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42317
	I0103 20:13:17.808340   61676 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0103 20:13:17.808354   61676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0103 20:13:17.808371   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHHostname
	I0103 20:13:17.808513   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetState
	I0103 20:13:17.809005   61676 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:13:17.809510   61676 main.go:141] libmachine: Using API Version  1
	I0103 20:13:17.809529   61676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:13:17.809978   61676 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:13:17.810778   61676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:13:17.810822   61676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:13:17.812250   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:13:17.812607   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:4a:19", ip: ""} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:13:17.812629   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:13:17.812892   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHPort
	I0103 20:13:17.812970   61676 main.go:141] libmachine: (embed-certs-451331) Calling .DriverName
	I0103 20:13:17.813069   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHKeyPath
	I0103 20:13:17.815321   61676 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0103 20:13:17.813342   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHUsername
	I0103 20:13:17.817289   61676 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0103 20:13:17.817308   61676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0103 20:13:17.817336   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHHostname
	I0103 20:13:17.817473   61676 sshutil.go:53] new ssh client: &{IP:192.168.50.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/embed-certs-451331/id_rsa Username:docker}
	I0103 20:13:17.820418   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:13:17.820892   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:4a:19", ip: ""} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:13:17.820920   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:13:17.821168   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHPort
	I0103 20:13:17.821350   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHKeyPath
	I0103 20:13:17.821468   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHUsername
	I0103 20:13:17.821597   61676 sshutil.go:53] new ssh client: &{IP:192.168.50.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/embed-certs-451331/id_rsa Username:docker}
	I0103 20:13:17.829857   61676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34553
	I0103 20:13:17.830343   61676 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:13:17.830847   61676 main.go:141] libmachine: Using API Version  1
	I0103 20:13:17.830869   61676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:13:17.831278   61676 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:13:17.831432   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetState
	I0103 20:13:17.833351   61676 main.go:141] libmachine: (embed-certs-451331) Calling .DriverName
	I0103 20:13:17.833678   61676 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0103 20:13:17.833695   61676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0103 20:13:17.833714   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHHostname
	I0103 20:13:17.837454   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:13:17.837708   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:4a:19", ip: ""} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:13:17.837730   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:13:17.837975   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHPort
	I0103 20:13:17.838211   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHKeyPath
	I0103 20:13:17.838384   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHUsername
	I0103 20:13:17.838534   61676 sshutil.go:53] new ssh client: &{IP:192.168.50.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/embed-certs-451331/id_rsa Username:docker}
	I0103 20:13:18.036885   61676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0103 20:13:18.097340   61676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0103 20:13:18.099953   61676 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0103 20:13:18.099982   61676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0103 20:13:18.242823   61676 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0103 20:13:18.242847   61676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0103 20:13:18.309930   61676 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0103 20:13:18.309959   61676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0103 20:13:18.321992   61676 node_ready.go:35] waiting up to 6m0s for node "embed-certs-451331" to be "Ready" ...
	I0103 20:13:18.322077   61676 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0103 20:13:18.366727   61676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0103 20:13:16.441666   62015 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.161911946s)
	I0103 20:13:16.441698   62015 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0103 20:13:16.441720   62015 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0103 20:13:16.441740   62015 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.155838517s)
	I0103 20:13:16.441767   62015 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0103 20:13:16.441855   62015 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0103 20:13:16.441964   62015 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0103 20:13:20.073248   61676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.975867864s)
	I0103 20:13:20.073318   61676 main.go:141] libmachine: Making call to close driver server
	I0103 20:13:20.073383   61676 main.go:141] libmachine: (embed-certs-451331) Calling .Close
	I0103 20:13:20.073265   61676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.03634078s)
	I0103 20:13:20.073419   61676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.706641739s)
	I0103 20:13:20.073466   61676 main.go:141] libmachine: Making call to close driver server
	I0103 20:13:20.073490   61676 main.go:141] libmachine: (embed-certs-451331) Calling .Close
	I0103 20:13:20.073489   61676 main.go:141] libmachine: Making call to close driver server
	I0103 20:13:20.073553   61676 main.go:141] libmachine: (embed-certs-451331) Calling .Close
	I0103 20:13:20.073744   61676 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:13:20.073759   61676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:13:20.073775   61676 main.go:141] libmachine: Making call to close driver server
	I0103 20:13:20.073786   61676 main.go:141] libmachine: (embed-certs-451331) Calling .Close
	I0103 20:13:20.073878   61676 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:13:20.073905   61676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:13:20.073935   61676 main.go:141] libmachine: (embed-certs-451331) DBG | Closing plugin on server side
	I0103 20:13:20.073938   61676 main.go:141] libmachine: Making call to close driver server
	I0103 20:13:20.073980   61676 main.go:141] libmachine: (embed-certs-451331) DBG | Closing plugin on server side
	I0103 20:13:20.073992   61676 main.go:141] libmachine: (embed-certs-451331) Calling .Close
	I0103 20:13:20.074016   61676 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:13:20.074036   61676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:13:20.074073   61676 main.go:141] libmachine: Making call to close driver server
	I0103 20:13:20.074086   61676 main.go:141] libmachine: (embed-certs-451331) Calling .Close
	I0103 20:13:20.074309   61676 main.go:141] libmachine: (embed-certs-451331) DBG | Closing plugin on server side
	I0103 20:13:20.074369   61676 main.go:141] libmachine: (embed-certs-451331) DBG | Closing plugin on server side
	I0103 20:13:20.074428   61676 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:13:20.074476   61676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:13:20.074454   61676 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:13:20.074506   61676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:13:20.074558   61676 addons.go:473] Verifying addon metrics-server=true in "embed-certs-451331"
	I0103 20:13:20.077560   61676 main.go:141] libmachine: (embed-certs-451331) DBG | Closing plugin on server side
	I0103 20:13:20.077613   61676 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:13:20.077653   61676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:13:20.088401   61676 main.go:141] libmachine: Making call to close driver server
	I0103 20:13:20.088441   61676 main.go:141] libmachine: (embed-certs-451331) Calling .Close
	I0103 20:13:20.088845   61676 main.go:141] libmachine: (embed-certs-451331) DBG | Closing plugin on server side
	I0103 20:13:20.090413   61676 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:13:20.090439   61676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:13:20.092641   61676 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I0103 20:13:16.593786   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:16.594320   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | unable to find current IP address of domain default-k8s-diff-port-018788 in network mk-default-k8s-diff-port-018788
	I0103 20:13:16.594352   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | I0103 20:13:16.594229   62835 retry.go:31] will retry after 1.232411416s: waiting for machine to come up
	I0103 20:13:17.828286   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:17.832049   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | unable to find current IP address of domain default-k8s-diff-port-018788 in network mk-default-k8s-diff-port-018788
	I0103 20:13:17.832078   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | I0103 20:13:17.828787   62835 retry.go:31] will retry after 2.020753248s: waiting for machine to come up
	I0103 20:13:19.851119   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:19.851645   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | unable to find current IP address of domain default-k8s-diff-port-018788 in network mk-default-k8s-diff-port-018788
	I0103 20:13:19.851683   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | I0103 20:13:19.851595   62835 retry.go:31] will retry after 2.720330873s: waiting for machine to come up
	I0103 20:13:20.094375   61676 addons.go:508] enable addons completed in 2.334425533s: enabled=[storage-provisioner metrics-server default-storageclass]
	I0103 20:13:20.325950   61676 node_ready.go:58] node "embed-certs-451331" has status "Ready":"False"
	I0103 20:13:22.327709   61676 node_ready.go:58] node "embed-certs-451331" has status "Ready":"False"
	I0103 20:13:19.820972   62015 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (3.379182556s)
	I0103 20:13:19.821009   62015 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0103 20:13:19.821032   62015 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0103 20:13:19.820976   62015 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (3.378974193s)
	I0103 20:13:19.821081   62015 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0103 20:13:19.821092   62015 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0103 20:13:21.294764   62015 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.47365805s)
	I0103 20:13:21.294796   62015 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0103 20:13:21.294826   62015 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0103 20:13:21.294879   62015 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0103 20:13:24.067996   62015 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.773083678s)
	I0103 20:13:24.068031   62015 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0103 20:13:24.068071   62015 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0103 20:13:24.068131   62015 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0103 20:13:22.573532   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:22.573959   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | unable to find current IP address of domain default-k8s-diff-port-018788 in network mk-default-k8s-diff-port-018788
	I0103 20:13:22.573984   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | I0103 20:13:22.573882   62835 retry.go:31] will retry after 2.869192362s: waiting for machine to come up
	I0103 20:13:25.444272   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:25.444774   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | unable to find current IP address of domain default-k8s-diff-port-018788 in network mk-default-k8s-diff-port-018788
	I0103 20:13:25.444801   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | I0103 20:13:25.444710   62835 retry.go:31] will retry after 3.61848561s: waiting for machine to come up
	I0103 20:13:24.327795   61676 node_ready.go:58] node "embed-certs-451331" has status "Ready":"False"
	I0103 20:13:24.831015   61676 node_ready.go:49] node "embed-certs-451331" has status "Ready":"True"
	I0103 20:13:24.831037   61676 node_ready.go:38] duration metric: took 6.509012992s waiting for node "embed-certs-451331" to be "Ready" ...
	I0103 20:13:24.831046   61676 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:13:24.838244   61676 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-sx6gg" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:25.345945   61676 pod_ready.go:92] pod "coredns-5dd5756b68-sx6gg" in "kube-system" namespace has status "Ready":"True"
	I0103 20:13:25.345980   61676 pod_ready.go:81] duration metric: took 507.709108ms waiting for pod "coredns-5dd5756b68-sx6gg" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:25.345991   61676 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-451331" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:25.352763   61676 pod_ready.go:92] pod "etcd-embed-certs-451331" in "kube-system" namespace has status "Ready":"True"
	I0103 20:13:25.352798   61676 pod_ready.go:81] duration metric: took 6.794419ms waiting for pod "etcd-embed-certs-451331" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:25.352812   61676 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-451331" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:25.359491   61676 pod_ready.go:92] pod "kube-apiserver-embed-certs-451331" in "kube-system" namespace has status "Ready":"True"
	I0103 20:13:25.359533   61676 pod_ready.go:81] duration metric: took 6.711829ms waiting for pod "kube-apiserver-embed-certs-451331" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:25.359547   61676 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-451331" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:25.867866   61676 pod_ready.go:92] pod "kube-controller-manager-embed-certs-451331" in "kube-system" namespace has status "Ready":"True"
	I0103 20:13:25.867898   61676 pod_ready.go:81] duration metric: took 508.341809ms waiting for pod "kube-controller-manager-embed-certs-451331" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:25.867912   61676 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fsnb9" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:26.026106   61676 pod_ready.go:92] pod "kube-proxy-fsnb9" in "kube-system" namespace has status "Ready":"True"
	I0103 20:13:26.026140   61676 pod_ready.go:81] duration metric: took 158.216243ms waiting for pod "kube-proxy-fsnb9" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:26.026153   61676 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-451331" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:26.428480   61676 pod_ready.go:92] pod "kube-scheduler-embed-certs-451331" in "kube-system" namespace has status "Ready":"True"
	I0103 20:13:26.428506   61676 pod_ready.go:81] duration metric: took 402.345241ms waiting for pod "kube-scheduler-embed-certs-451331" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:26.428525   61676 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:28.438138   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:13:27.768745   62015 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (3.700590535s)
	I0103 20:13:27.768774   62015 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0103 20:13:27.768797   62015 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0103 20:13:27.768833   62015 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0103 20:13:28.718165   62015 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0103 20:13:28.718231   62015 cache_images.go:123] Successfully loaded all cached images
	I0103 20:13:28.718239   62015 cache_images.go:92] LoadImages completed in 17.301651166s
	I0103 20:13:28.718342   62015 ssh_runner.go:195] Run: crio config
	I0103 20:13:28.770786   62015 cni.go:84] Creating CNI manager for ""
	I0103 20:13:28.770813   62015 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0103 20:13:28.770838   62015 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0103 20:13:28.770862   62015 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.245 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-749210 NodeName:no-preload-749210 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.245"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.245 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0103 20:13:28.771031   62015 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.245
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-749210"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.245
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.245"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0103 20:13:28.771103   62015 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-749210 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.245
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-749210 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0103 20:13:28.771163   62015 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0103 20:13:28.780756   62015 binaries.go:44] Found k8s binaries, skipping transfer
	I0103 20:13:28.780834   62015 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0103 20:13:28.789160   62015 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I0103 20:13:28.804638   62015 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0103 20:13:28.820113   62015 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I0103 20:13:28.835707   62015 ssh_runner.go:195] Run: grep 192.168.61.245	control-plane.minikube.internal$ /etc/hosts
	I0103 20:13:28.839456   62015 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.245	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0103 20:13:28.850530   62015 certs.go:56] Setting up /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/no-preload-749210 for IP: 192.168.61.245
	I0103 20:13:28.850581   62015 certs.go:190] acquiring lock for shared ca certs: {Name:mkcbd6a6a2f3ee7625ecf4a1f72bb7f9689bd33d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:13:28.850730   62015 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17885-9609/.minikube/ca.key
	I0103 20:13:28.850770   62015 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.key
	I0103 20:13:28.850833   62015 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/no-preload-749210/client.key
	I0103 20:13:28.850886   62015 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/no-preload-749210/apiserver.key.5dd805e0
	I0103 20:13:28.850922   62015 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/no-preload-749210/proxy-client.key
	I0103 20:13:28.851054   62015 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795.pem (1338 bytes)
	W0103 20:13:28.851081   62015 certs.go:433] ignoring /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795_empty.pem, impossibly tiny 0 bytes
	I0103 20:13:28.851093   62015 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem (1675 bytes)
	I0103 20:13:28.851117   62015 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem (1078 bytes)
	I0103 20:13:28.851139   62015 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem (1123 bytes)
	I0103 20:13:28.851168   62015 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem (1679 bytes)
	I0103 20:13:28.851210   62015 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem (1708 bytes)
	I0103 20:13:28.851832   62015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/no-preload-749210/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0103 20:13:28.874236   62015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/no-preload-749210/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0103 20:13:28.896624   62015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/no-preload-749210/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0103 20:13:28.919016   62015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/no-preload-749210/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0103 20:13:28.941159   62015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0103 20:13:28.963311   62015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0103 20:13:28.985568   62015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0103 20:13:29.007709   62015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0103 20:13:29.030188   62015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0103 20:13:29.052316   62015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795.pem --> /usr/share/ca-certificates/16795.pem (1338 bytes)
	I0103 20:13:29.076761   62015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem --> /usr/share/ca-certificates/167952.pem (1708 bytes)
	I0103 20:13:29.101462   62015 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0103 20:13:29.118605   62015 ssh_runner.go:195] Run: openssl version
	I0103 20:13:29.124144   62015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0103 20:13:29.133148   62015 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:13:29.137750   62015 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  3 18:58 /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:13:29.137809   62015 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:13:29.143321   62015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0103 20:13:29.152302   62015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16795.pem && ln -fs /usr/share/ca-certificates/16795.pem /etc/ssl/certs/16795.pem"
	I0103 20:13:29.161551   62015 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16795.pem
	I0103 20:13:29.166396   62015 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  3 19:07 /usr/share/ca-certificates/16795.pem
	I0103 20:13:29.166457   62015 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16795.pem
	I0103 20:13:29.173179   62015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16795.pem /etc/ssl/certs/51391683.0"
	I0103 20:13:29.184167   62015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167952.pem && ln -fs /usr/share/ca-certificates/167952.pem /etc/ssl/certs/167952.pem"
	I0103 20:13:29.194158   62015 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167952.pem
	I0103 20:13:29.198763   62015 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  3 19:07 /usr/share/ca-certificates/167952.pem
	I0103 20:13:29.198836   62015 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167952.pem
	I0103 20:13:29.204516   62015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167952.pem /etc/ssl/certs/3ec20f2e.0"
	I0103 20:13:29.214529   62015 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0103 20:13:29.218834   62015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0103 20:13:29.225036   62015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0103 20:13:29.231166   62015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0103 20:13:29.237200   62015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0103 20:13:29.243158   62015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0103 20:13:29.249694   62015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0103 20:13:29.255582   62015 kubeadm.go:404] StartCluster: {Name:no-preload-749210 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-749210 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.245 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 20:13:29.255672   62015 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0103 20:13:29.255758   62015 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0103 20:13:29.299249   62015 cri.go:89] found id: ""
	I0103 20:13:29.299346   62015 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0103 20:13:29.311210   62015 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0103 20:13:29.311227   62015 kubeadm.go:636] restartCluster start
	I0103 20:13:29.311271   62015 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0103 20:13:29.320430   62015 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:29.321471   62015 kubeconfig.go:92] found "no-preload-749210" server: "https://192.168.61.245:8443"
	I0103 20:13:29.324643   62015 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0103 20:13:29.333237   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:29.333300   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:29.345156   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:30.219284   61400 start.go:369] acquired machines lock for "old-k8s-version-927922" in 54.622555379s
	I0103 20:13:30.219352   61400 start.go:96] Skipping create...Using existing machine configuration
	I0103 20:13:30.219364   61400 fix.go:54] fixHost starting: 
	I0103 20:13:30.219739   61400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:13:30.219770   61400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:13:30.235529   61400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41183
	I0103 20:13:30.235926   61400 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:13:30.236537   61400 main.go:141] libmachine: Using API Version  1
	I0103 20:13:30.236562   61400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:13:30.236911   61400 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:13:30.237121   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .DriverName
	I0103 20:13:30.237293   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetState
	I0103 20:13:30.238979   61400 fix.go:102] recreateIfNeeded on old-k8s-version-927922: state=Stopped err=<nil>
	I0103 20:13:30.239006   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .DriverName
	W0103 20:13:30.239155   61400 fix.go:128] unexpected machine state, will restart: <nil>
	I0103 20:13:30.241210   61400 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-927922" ...
	I0103 20:13:29.067586   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.068030   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Found IP for machine: 192.168.39.139
	I0103 20:13:29.068048   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Reserving static IP address...
	I0103 20:13:29.068090   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has current primary IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.068505   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-018788", mac: "52:54:00:df:c8:9f", ip: "192.168.39.139"} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:13:29.068532   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | skip adding static IP to network mk-default-k8s-diff-port-018788 - found existing host DHCP lease matching {name: "default-k8s-diff-port-018788", mac: "52:54:00:df:c8:9f", ip: "192.168.39.139"}
	I0103 20:13:29.068549   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Reserved static IP address: 192.168.39.139
	I0103 20:13:29.068571   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for SSH to be available...
	I0103 20:13:29.068608   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | Getting to WaitForSSH function...
	I0103 20:13:29.071139   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.071587   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c8:9f", ip: ""} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:13:29.071620   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.071779   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | Using SSH client type: external
	I0103 20:13:29.071810   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | Using SSH private key: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/default-k8s-diff-port-018788/id_rsa (-rw-------)
	I0103 20:13:29.071858   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.139 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17885-9609/.minikube/machines/default-k8s-diff-port-018788/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0103 20:13:29.071879   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | About to run SSH command:
	I0103 20:13:29.071896   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | exit 0
	I0103 20:13:29.166962   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | SSH cmd err, output: <nil>: 
	I0103 20:13:29.167365   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetConfigRaw
	I0103 20:13:29.167989   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetIP
	I0103 20:13:29.170671   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.171052   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c8:9f", ip: ""} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:13:29.171092   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.171325   62050 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/default-k8s-diff-port-018788/config.json ...
	I0103 20:13:29.171564   62050 machine.go:88] provisioning docker machine ...
	I0103 20:13:29.171589   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .DriverName
	I0103 20:13:29.171866   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetMachineName
	I0103 20:13:29.172058   62050 buildroot.go:166] provisioning hostname "default-k8s-diff-port-018788"
	I0103 20:13:29.172084   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetMachineName
	I0103 20:13:29.172253   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHHostname
	I0103 20:13:29.175261   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.175626   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c8:9f", ip: ""} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:13:29.175660   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.175749   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHPort
	I0103 20:13:29.175943   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHKeyPath
	I0103 20:13:29.176219   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHKeyPath
	I0103 20:13:29.176392   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHUsername
	I0103 20:13:29.176611   62050 main.go:141] libmachine: Using SSH client type: native
	I0103 20:13:29.177083   62050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.39.139 22 <nil> <nil>}
	I0103 20:13:29.177105   62050 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-018788 && echo "default-k8s-diff-port-018788" | sudo tee /etc/hostname
	I0103 20:13:29.304876   62050 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-018788
	
	I0103 20:13:29.304915   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHHostname
	I0103 20:13:29.307645   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.308124   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c8:9f", ip: ""} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:13:29.308190   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.308389   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHPort
	I0103 20:13:29.308619   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHKeyPath
	I0103 20:13:29.308799   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHKeyPath
	I0103 20:13:29.308997   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHUsername
	I0103 20:13:29.309177   62050 main.go:141] libmachine: Using SSH client type: native
	I0103 20:13:29.309652   62050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.39.139 22 <nil> <nil>}
	I0103 20:13:29.309682   62050 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-018788' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-018788/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-018788' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0103 20:13:29.431479   62050 main.go:141] libmachine: SSH cmd err, output: <nil>: 
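For readability, the hostname block the provisioner just ran is equivalent to the following sketch (same logic with the hostname from this profile spelled out; not a verbatim copy of minikube's template):

	NEW_HOSTNAME=default-k8s-diff-port-018788
	if ! grep -q "[[:space:]]${NEW_HOSTNAME}$" /etc/hosts; then
	  if grep -q '^127\.0\.1\.1[[:space:]]' /etc/hosts; then
	    # rewrite the existing 127.0.1.1 entry in place
	    sudo sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 ${NEW_HOSTNAME}/" /etc/hosts
	  else
	    # or append one if none exists yet
	    echo "127.0.1.1 ${NEW_HOSTNAME}" | sudo tee -a /etc/hosts >/dev/null
	  fi
	fi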
	I0103 20:13:29.431517   62050 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17885-9609/.minikube CaCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17885-9609/.minikube}
	I0103 20:13:29.431555   62050 buildroot.go:174] setting up certificates
	I0103 20:13:29.431569   62050 provision.go:83] configureAuth start
	I0103 20:13:29.431582   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetMachineName
	I0103 20:13:29.431900   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetIP
	I0103 20:13:29.435012   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.435482   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c8:9f", ip: ""} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:13:29.435517   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.435638   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHHostname
	I0103 20:13:29.437865   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.438267   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c8:9f", ip: ""} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:13:29.438303   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.438388   62050 provision.go:138] copyHostCerts
	I0103 20:13:29.438448   62050 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem, removing ...
	I0103 20:13:29.438461   62050 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem
	I0103 20:13:29.438527   62050 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem (1078 bytes)
	I0103 20:13:29.438625   62050 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem, removing ...
	I0103 20:13:29.438633   62050 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem
	I0103 20:13:29.438653   62050 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem (1123 bytes)
	I0103 20:13:29.438713   62050 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem, removing ...
	I0103 20:13:29.438720   62050 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem
	I0103 20:13:29.438738   62050 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem (1679 bytes)
	I0103 20:13:29.438787   62050 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-018788 san=[192.168.39.139 192.168.39.139 localhost 127.0.0.1 minikube default-k8s-diff-port-018788]
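The SAN list above is what ends up in the machine's server certificate; one way to confirm it afterwards (not something this run does) is to inspect the generated file with openssl:

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'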
	I0103 20:13:29.494476   62050 provision.go:172] copyRemoteCerts
	I0103 20:13:29.494562   62050 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0103 20:13:29.494590   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHHostname
	I0103 20:13:29.497330   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.497597   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c8:9f", ip: ""} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:13:29.497623   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.497786   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHPort
	I0103 20:13:29.497956   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHKeyPath
	I0103 20:13:29.498139   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHUsername
	I0103 20:13:29.498268   62050 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/default-k8s-diff-port-018788/id_rsa Username:docker}
	I0103 20:13:29.583531   62050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0103 20:13:29.605944   62050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0103 20:13:29.630747   62050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0103 20:13:29.656325   62050 provision.go:86] duration metric: configureAuth took 224.741883ms
	I0103 20:13:29.656355   62050 buildroot.go:189] setting minikube options for container-runtime
	I0103 20:13:29.656525   62050 config.go:182] Loaded profile config "default-k8s-diff-port-018788": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 20:13:29.656619   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHHostname
	I0103 20:13:29.659656   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.660182   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c8:9f", ip: ""} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:13:29.660213   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.660434   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHPort
	I0103 20:13:29.660643   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHKeyPath
	I0103 20:13:29.660864   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHKeyPath
	I0103 20:13:29.661019   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHUsername
	I0103 20:13:29.661217   62050 main.go:141] libmachine: Using SSH client type: native
	I0103 20:13:29.661571   62050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.39.139 22 <nil> <nil>}
	I0103 20:13:29.661588   62050 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0103 20:13:29.970938   62050 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0103 20:13:29.970966   62050 machine.go:91] provisioned docker machine in 799.385733ms
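The %!s(MISSING) token in the sysconfig command above is a Go format-verb artifact in the log, not part of what ran on the guest; the command as actually sent is presumably the same write with the options string substituted for %s, roughly:

	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio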
	I0103 20:13:29.970975   62050 start.go:300] post-start starting for "default-k8s-diff-port-018788" (driver="kvm2")
	I0103 20:13:29.970985   62050 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0103 20:13:29.971007   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .DriverName
	I0103 20:13:29.971387   62050 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0103 20:13:29.971418   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHHostname
	I0103 20:13:29.974114   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.974487   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c8:9f", ip: ""} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:13:29.974562   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.974706   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHPort
	I0103 20:13:29.974894   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHKeyPath
	I0103 20:13:29.975075   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHUsername
	I0103 20:13:29.975227   62050 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/default-k8s-diff-port-018788/id_rsa Username:docker}
	I0103 20:13:30.061987   62050 ssh_runner.go:195] Run: cat /etc/os-release
	I0103 20:13:30.066591   62050 info.go:137] Remote host: Buildroot 2021.02.12
	I0103 20:13:30.066620   62050 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-9609/.minikube/addons for local assets ...
	I0103 20:13:30.066704   62050 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-9609/.minikube/files for local assets ...
	I0103 20:13:30.066795   62050 filesync.go:149] local asset: /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem -> 167952.pem in /etc/ssl/certs
	I0103 20:13:30.066899   62050 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0103 20:13:30.076755   62050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem --> /etc/ssl/certs/167952.pem (1708 bytes)
	I0103 20:13:30.099740   62050 start.go:303] post-start completed in 128.750887ms
	I0103 20:13:30.099763   62050 fix.go:56] fixHost completed within 20.287967183s
	I0103 20:13:30.099782   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHHostname
	I0103 20:13:30.102744   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:30.103145   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c8:9f", ip: ""} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:13:30.103177   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:30.103409   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHPort
	I0103 20:13:30.103633   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHKeyPath
	I0103 20:13:30.103846   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHKeyPath
	I0103 20:13:30.104080   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHUsername
	I0103 20:13:30.104308   62050 main.go:141] libmachine: Using SSH client type: native
	I0103 20:13:30.104680   62050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.39.139 22 <nil> <nil>}
	I0103 20:13:30.104696   62050 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0103 20:13:30.219120   62050 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704312810.161605674
	
	I0103 20:13:30.219145   62050 fix.go:206] guest clock: 1704312810.161605674
	I0103 20:13:30.219154   62050 fix.go:219] Guest: 2024-01-03 20:13:30.161605674 +0000 UTC Remote: 2024-01-03 20:13:30.099767061 +0000 UTC m=+264.645600185 (delta=61.838613ms)
	I0103 20:13:30.219191   62050 fix.go:190] guest clock delta is within tolerance: 61.838613ms
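The date +%!s(MISSING).%!N(MISSING) a few lines up is the same logging artifact; the command actually sent is presumably date +%s.%N, and the provisioner compares that guest timestamp with the host clock, as reflected in the delta above. A standalone sketch of the same check (the ssh invocation and the 1-second tolerance are assumptions, not values from this run):

	guest=$(ssh docker@192.168.39.139 'date +%s.%N')   # guest clock, seconds.nanoseconds
	host=$(date +%s.%N)                                # host clock at roughly the same moment
	awk -v h="$host" -v g="$guest" 'BEGIN {
	  d = h - g; if (d < 0) d = -d
	  printf "guest clock delta: %.3fs\n", d
	  exit (d <= 1.0) ? 0 : 1                          # exit 0 = within tolerance
	}'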
	I0103 20:13:30.219202   62050 start.go:83] releasing machines lock for "default-k8s-diff-port-018788", held for 20.407440359s
	I0103 20:13:30.219230   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .DriverName
	I0103 20:13:30.219551   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetIP
	I0103 20:13:30.222200   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:30.222616   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c8:9f", ip: ""} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:13:30.222650   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:30.222811   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .DriverName
	I0103 20:13:30.223411   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .DriverName
	I0103 20:13:30.223568   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .DriverName
	I0103 20:13:30.223643   62050 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0103 20:13:30.223686   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHHostname
	I0103 20:13:30.223940   62050 ssh_runner.go:195] Run: cat /version.json
	I0103 20:13:30.223970   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHHostname
	I0103 20:13:30.226394   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:30.226746   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c8:9f", ip: ""} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:13:30.226777   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:30.226809   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:30.227080   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHPort
	I0103 20:13:30.227274   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHKeyPath
	I0103 20:13:30.227389   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c8:9f", ip: ""} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:13:30.227443   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHUsername
	I0103 20:13:30.227446   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:30.227567   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHPort
	I0103 20:13:30.227595   62050 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/default-k8s-diff-port-018788/id_rsa Username:docker}
	I0103 20:13:30.227739   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHKeyPath
	I0103 20:13:30.227864   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHUsername
	I0103 20:13:30.227972   62050 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/default-k8s-diff-port-018788/id_rsa Username:docker}
	I0103 20:13:30.315855   62050 ssh_runner.go:195] Run: systemctl --version
	I0103 20:13:30.359117   62050 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0103 20:13:30.499200   62050 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0103 20:13:30.505296   62050 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0103 20:13:30.505768   62050 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0103 20:13:30.520032   62050 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0103 20:13:30.520059   62050 start.go:475] detecting cgroup driver to use...
	I0103 20:13:30.520146   62050 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0103 20:13:30.532684   62050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0103 20:13:30.545152   62050 docker.go:203] disabling cri-docker service (if available) ...
	I0103 20:13:30.545222   62050 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0103 20:13:30.558066   62050 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0103 20:13:30.570999   62050 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0103 20:13:30.682484   62050 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0103 20:13:30.802094   62050 docker.go:219] disabling docker service ...
	I0103 20:13:30.802171   62050 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0103 20:13:30.815796   62050 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0103 20:13:30.827982   62050 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0103 20:13:30.952442   62050 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0103 20:13:31.068759   62050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0103 20:13:31.083264   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0103 20:13:31.102893   62050 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0103 20:13:31.102979   62050 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:13:31.112366   62050 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0103 20:13:31.112433   62050 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:13:31.122940   62050 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:13:31.133385   62050 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:13:31.144251   62050 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0103 20:13:31.155210   62050 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0103 20:13:31.164488   62050 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0103 20:13:31.164552   62050 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0103 20:13:31.177632   62050 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0103 20:13:31.186983   62050 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0103 20:13:31.309264   62050 ssh_runner.go:195] Run: sudo systemctl restart crio
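Condensed, the cri-o preparation above amounts to the following sequence (the same steps as the logged commands with the nested quoting unwrapped; illustrative, not a replacement for minikube's own logic):

	# point crictl at the cri-o socket
	printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	# pin the pause image and switch cri-o to the cgroupfs driver
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	# make sure bridged traffic hits iptables and IP forwarding is on, then restart cri-o
	sudo modprobe br_netfilter
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	sudo systemctl daemon-reload && sudo systemctl restart crio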
	I0103 20:13:31.493626   62050 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0103 20:13:31.493706   62050 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0103 20:13:31.504103   62050 start.go:543] Will wait 60s for crictl version
	I0103 20:13:31.504187   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:13:31.507927   62050 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0103 20:13:31.543967   62050 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0103 20:13:31.544046   62050 ssh_runner.go:195] Run: crio --version
	I0103 20:13:31.590593   62050 ssh_runner.go:195] Run: crio --version
	I0103 20:13:31.639562   62050 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0103 20:13:30.242808   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .Start
	I0103 20:13:30.242991   61400 main.go:141] libmachine: (old-k8s-version-927922) Ensuring networks are active...
	I0103 20:13:30.243776   61400 main.go:141] libmachine: (old-k8s-version-927922) Ensuring network default is active
	I0103 20:13:30.244126   61400 main.go:141] libmachine: (old-k8s-version-927922) Ensuring network mk-old-k8s-version-927922 is active
	I0103 20:13:30.244504   61400 main.go:141] libmachine: (old-k8s-version-927922) Getting domain xml...
	I0103 20:13:30.245244   61400 main.go:141] libmachine: (old-k8s-version-927922) Creating domain...
	I0103 20:13:31.553239   61400 main.go:141] libmachine: (old-k8s-version-927922) Waiting to get IP...
	I0103 20:13:31.554409   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:31.554942   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | unable to find current IP address of domain old-k8s-version-927922 in network mk-old-k8s-version-927922
	I0103 20:13:31.555022   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | I0103 20:13:31.554922   63030 retry.go:31] will retry after 192.654673ms: waiting for machine to come up
	I0103 20:13:31.749588   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:31.750035   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | unable to find current IP address of domain old-k8s-version-927922 in network mk-old-k8s-version-927922
	I0103 20:13:31.750058   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | I0103 20:13:31.750000   63030 retry.go:31] will retry after 270.810728ms: waiting for machine to come up
	I0103 20:13:32.022736   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:32.023310   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | unable to find current IP address of domain old-k8s-version-927922 in network mk-old-k8s-version-927922
	I0103 20:13:32.023337   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | I0103 20:13:32.023280   63030 retry.go:31] will retry after 327.320898ms: waiting for machine to come up
	I0103 20:13:32.352845   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:32.353453   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | unable to find current IP address of domain old-k8s-version-927922 in network mk-old-k8s-version-927922
	I0103 20:13:32.353501   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | I0103 20:13:32.353395   63030 retry.go:31] will retry after 575.525231ms: waiting for machine to come up
	I0103 20:13:32.930217   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:32.930833   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | unable to find current IP address of domain old-k8s-version-927922 in network mk-old-k8s-version-927922
	I0103 20:13:32.930859   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | I0103 20:13:32.930741   63030 retry.go:31] will retry after 571.986596ms: waiting for machine to come up
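The retry.go lines show the usual wait-for-IP loop: query the libvirt DHCP lease for the domain and back off a little longer on each miss. A rough by-hand equivalent (the virsh query and the doubling backoff are assumptions for illustration, not what minikube executes):

	delay=0.2
	for attempt in 1 2 3 4 5 6 7 8 9 10; do
	  ip=$(sudo virsh -q domifaddr old-k8s-version-927922 --source lease | awk '/ipv4/ {print $4}' | cut -d/ -f1)
	  if [ -n "$ip" ]; then echo "machine IP: $ip"; break; fi
	  echo "attempt $attempt: no IP yet, retrying in ${delay}s"
	  sleep "$delay"
	  delay=$(awk -v d="$delay" 'BEGIN { print d * 2 }')
	done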
	I0103 20:13:30.936363   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:13:32.939164   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
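The metrics-server pod staying not Ready lines up with the fake.domain registry override visible in the profile config at the top of this excerpt; the image pull presumably fails by design, so the readiness poll just keeps reporting False until it times out. Commands that would confirm the reason (not captured in this log):

	kubectl -n kube-system describe pod metrics-server-57f55c9bc5-sm8rb   # expect an image pull error against fake.domain
	kubectl -n kube-system get events --sort-by=.lastTimestamp | tail -n 20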
	I0103 20:13:29.833307   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:29.833374   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:29.844819   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:30.333870   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:30.333936   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:30.345802   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:30.833281   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:30.833400   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:30.848469   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:31.334071   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:31.334151   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:31.346445   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:31.833944   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:31.834034   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:31.848925   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:32.333349   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:32.333432   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:32.349173   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:32.833632   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:32.833696   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:32.848186   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:33.333659   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:33.333757   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:33.349560   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:33.834221   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:33.834309   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:33.846637   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:34.334219   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:34.334299   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:34.350703   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:31.641182   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetIP
	I0103 20:13:31.644371   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:31.644677   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c8:9f", ip: ""} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:13:31.644712   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:31.644971   62050 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0103 20:13:31.649106   62050 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0103 20:13:31.662256   62050 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0103 20:13:31.662380   62050 ssh_runner.go:195] Run: sudo crictl images --output json
	I0103 20:13:31.701210   62050 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0103 20:13:31.701275   62050 ssh_runner.go:195] Run: which lz4
	I0103 20:13:31.704890   62050 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0103 20:13:31.708756   62050 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0103 20:13:31.708783   62050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0103 20:13:33.543202   62050 crio.go:444] Took 1.838336 seconds to copy over tarball
	I0103 20:13:33.543282   62050 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0103 20:13:33.504797   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:33.505336   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | unable to find current IP address of domain old-k8s-version-927922 in network mk-old-k8s-version-927922
	I0103 20:13:33.505363   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | I0103 20:13:33.505286   63030 retry.go:31] will retry after 593.865088ms: waiting for machine to come up
	I0103 20:13:34.101055   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:34.101559   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | unable to find current IP address of domain old-k8s-version-927922 in network mk-old-k8s-version-927922
	I0103 20:13:34.101593   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | I0103 20:13:34.101507   63030 retry.go:31] will retry after 1.016460442s: waiting for machine to come up
	I0103 20:13:35.119877   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:35.120383   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | unable to find current IP address of domain old-k8s-version-927922 in network mk-old-k8s-version-927922
	I0103 20:13:35.120415   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | I0103 20:13:35.120352   63030 retry.go:31] will retry after 1.462823241s: waiting for machine to come up
	I0103 20:13:36.585467   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:36.585968   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | unable to find current IP address of domain old-k8s-version-927922 in network mk-old-k8s-version-927922
	I0103 20:13:36.585993   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | I0103 20:13:36.585932   63030 retry.go:31] will retry after 1.213807131s: waiting for machine to come up
	I0103 20:13:37.801504   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:37.801970   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | unable to find current IP address of domain old-k8s-version-927922 in network mk-old-k8s-version-927922
	I0103 20:13:37.801999   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | I0103 20:13:37.801896   63030 retry.go:31] will retry after 1.961227471s: waiting for machine to come up
	I0103 20:13:35.435661   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:13:37.435870   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:13:34.834090   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:34.834160   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:34.848657   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:35.333723   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:35.333809   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:35.348582   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:35.834128   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:35.834208   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:35.845911   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:36.333385   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:36.333512   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:36.346391   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:36.833978   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:36.834054   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:36.847134   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:37.333698   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:37.333785   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:37.346411   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:37.834024   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:37.834141   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:37.846961   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:38.333461   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:38.333665   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:38.346713   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:38.834378   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:38.834470   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:38.848473   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:39.333266   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:39.333347   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:39.345638   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:

	I0103 20:13:39.345664   62015 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0103 20:13:39.345692   62015 kubeadm.go:1135] stopping kube-system containers ...
	I0103 20:13:39.345721   62015 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0103 20:13:39.345792   62015 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0103 20:13:39.387671   62015 cri.go:89] found id: ""
	I0103 20:13:39.387778   62015 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0103 20:13:39.403523   62015 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0103 20:13:39.413114   62015 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0103 20:13:39.413188   62015 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0103 20:13:39.421503   62015 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0103 20:13:39.421527   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:39.561406   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:36.473303   62050 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.929985215s)
	I0103 20:13:36.473337   62050 crio.go:451] Took 2.930104 seconds to extract the tarball
	I0103 20:13:36.473350   62050 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0103 20:13:36.513202   62050 ssh_runner.go:195] Run: sudo crictl images --output json
	I0103 20:13:36.557201   62050 crio.go:496] all images are preloaded for cri-o runtime.
	I0103 20:13:36.557231   62050 cache_images.go:84] Images are preloaded, skipping loading
	I0103 20:13:36.557314   62050 ssh_runner.go:195] Run: crio config
	I0103 20:13:36.618916   62050 cni.go:84] Creating CNI manager for ""
	I0103 20:13:36.618948   62050 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0103 20:13:36.618982   62050 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0103 20:13:36.619007   62050 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.139 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-018788 NodeName:default-k8s-diff-port-018788 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.139"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.139 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0103 20:13:36.619167   62050 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.139
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-018788"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.139
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.139"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0103 20:13:36.619242   62050 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-018788 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.139
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-018788 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0103 20:13:36.619294   62050 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0103 20:13:36.628488   62050 binaries.go:44] Found k8s binaries, skipping transfer
	I0103 20:13:36.628571   62050 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0103 20:13:36.637479   62050 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I0103 20:13:36.652608   62050 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0103 20:13:36.667432   62050 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
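
The kubeadm and kubelet configuration shown above is rendered in memory and then copied to /var/tmp/minikube/kubeadm.yaml.new over SSH. As a rough illustration only (these are assumed names and a simplified fragment, not minikube's actual template or bootstrapper types), a piece of that InitConfiguration could be produced with Go's text/template like this:

package main

import (
	"os"
	"text/template"
)

// initCfg holds the values that vary per profile; the literals in main
// mirror the log above.
type initCfg struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
}

const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.AdvertiseAddress}}
`

func main() {
	cfg := initCfg{AdvertiseAddress: "192.168.39.139", BindPort: 8444, NodeName: "default-k8s-diff-port-018788"}
	// Render the fragment to stdout; the real flow writes the full file to the guest VM.
	if err := template.Must(template.New("init").Parse(initTmpl)).Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}
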
	I0103 20:13:36.683138   62050 ssh_runner.go:195] Run: grep 192.168.39.139	control-plane.minikube.internal$ /etc/hosts
	I0103 20:13:36.687022   62050 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.139	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0103 20:13:36.698713   62050 certs.go:56] Setting up /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/default-k8s-diff-port-018788 for IP: 192.168.39.139
	I0103 20:13:36.698755   62050 certs.go:190] acquiring lock for shared ca certs: {Name:mkcbd6a6a2f3ee7625ecf4a1f72bb7f9689bd33d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:13:36.698948   62050 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17885-9609/.minikube/ca.key
	I0103 20:13:36.699009   62050 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.key
	I0103 20:13:36.699098   62050 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/default-k8s-diff-port-018788/client.key
	I0103 20:13:36.699157   62050 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/default-k8s-diff-port-018788/apiserver.key.7716debd
	I0103 20:13:36.699196   62050 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/default-k8s-diff-port-018788/proxy-client.key
	I0103 20:13:36.699287   62050 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795.pem (1338 bytes)
	W0103 20:13:36.699314   62050 certs.go:433] ignoring /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795_empty.pem, impossibly tiny 0 bytes
	I0103 20:13:36.699324   62050 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem (1675 bytes)
	I0103 20:13:36.699349   62050 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem (1078 bytes)
	I0103 20:13:36.699370   62050 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem (1123 bytes)
	I0103 20:13:36.699395   62050 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem (1679 bytes)
	I0103 20:13:36.699434   62050 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem (1708 bytes)
	I0103 20:13:36.700045   62050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/default-k8s-diff-port-018788/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0103 20:13:36.721872   62050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/default-k8s-diff-port-018788/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0103 20:13:36.744733   62050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/default-k8s-diff-port-018788/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0103 20:13:36.772245   62050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/default-k8s-diff-port-018788/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0103 20:13:36.796690   62050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0103 20:13:36.819792   62050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0103 20:13:36.843109   62050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0103 20:13:36.866679   62050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0103 20:13:36.889181   62050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0103 20:13:36.912082   62050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795.pem --> /usr/share/ca-certificates/16795.pem (1338 bytes)
	I0103 20:13:36.935621   62050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem --> /usr/share/ca-certificates/167952.pem (1708 bytes)
	I0103 20:13:36.959090   62050 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0103 20:13:36.974873   62050 ssh_runner.go:195] Run: openssl version
	I0103 20:13:36.980449   62050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167952.pem && ln -fs /usr/share/ca-certificates/167952.pem /etc/ssl/certs/167952.pem"
	I0103 20:13:36.990278   62050 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167952.pem
	I0103 20:13:36.995822   62050 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  3 19:07 /usr/share/ca-certificates/167952.pem
	I0103 20:13:36.995903   62050 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167952.pem
	I0103 20:13:37.001504   62050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167952.pem /etc/ssl/certs/3ec20f2e.0"
	I0103 20:13:37.011628   62050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0103 20:13:37.021373   62050 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:13:37.025697   62050 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  3 18:58 /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:13:37.025752   62050 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:13:37.031286   62050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0103 20:13:37.041075   62050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16795.pem && ln -fs /usr/share/ca-certificates/16795.pem /etc/ssl/certs/16795.pem"
	I0103 20:13:37.050789   62050 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16795.pem
	I0103 20:13:37.055584   62050 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  3 19:07 /usr/share/ca-certificates/16795.pem
	I0103 20:13:37.055647   62050 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16795.pem
	I0103 20:13:37.061079   62050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16795.pem /etc/ssl/certs/51391683.0"
	I0103 20:13:37.070792   62050 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0103 20:13:37.075050   62050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0103 20:13:37.081170   62050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0103 20:13:37.087372   62050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0103 20:13:37.093361   62050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0103 20:13:37.099203   62050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0103 20:13:37.104932   62050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
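
Each `openssl x509 -noout -in <cert> -checkend 86400` run above asserts that the certificate will still be valid 24 hours from now. A minimal Go equivalent (a hypothetical helper for illustration, not minikube's implementation) would be:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// certValidFor reports whether the PEM certificate at path is still valid
// after the given duration, mirroring `openssl x509 -checkend`.
func certValidFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := certValidFor("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	fmt.Println(ok, err)
}
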
	I0103 20:13:37.110783   62050 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-018788 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-018788 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.139 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 20:13:37.110955   62050 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0103 20:13:37.111003   62050 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0103 20:13:37.146687   62050 cri.go:89] found id: ""
	I0103 20:13:37.146766   62050 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0103 20:13:37.156789   62050 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0103 20:13:37.156808   62050 kubeadm.go:636] restartCluster start
	I0103 20:13:37.156882   62050 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0103 20:13:37.166168   62050 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:37.167346   62050 kubeconfig.go:92] found "default-k8s-diff-port-018788" server: "https://192.168.39.139:8444"
	I0103 20:13:37.169750   62050 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0103 20:13:37.178965   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:37.179035   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:37.190638   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:37.679072   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:37.679142   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:37.691149   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:38.179709   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:38.179804   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:38.191656   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:38.679825   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:38.679912   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:38.693380   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:39.179927   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:39.180042   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:39.193368   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:39.679947   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:39.680049   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:39.692444   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:40.179510   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:40.179600   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:40.192218   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:39.764226   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:39.764651   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | unable to find current IP address of domain old-k8s-version-927922 in network mk-old-k8s-version-927922
	I0103 20:13:39.764681   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | I0103 20:13:39.764592   63030 retry.go:31] will retry after 2.38598238s: waiting for machine to come up
	I0103 20:13:42.151992   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:42.152486   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | unable to find current IP address of domain old-k8s-version-927922 in network mk-old-k8s-version-927922
	I0103 20:13:42.152517   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | I0103 20:13:42.152435   63030 retry.go:31] will retry after 3.320569317s: waiting for machine to come up
	I0103 20:13:39.438887   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:13:41.441552   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:13:40.707462   62015 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.146014282s)
	I0103 20:13:40.707501   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:40.913812   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:41.008294   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:41.093842   62015 api_server.go:52] waiting for apiserver process to appear ...
	I0103 20:13:41.093931   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:13:41.594484   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:13:42.094333   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:13:42.594647   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:13:43.094744   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:13:43.594323   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:13:43.628624   62015 api_server.go:72] duration metric: took 2.534781213s to wait for apiserver process to appear ...
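
The repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` runs above poll roughly every 500ms until the kube-apiserver process exists. A simplified local sketch of that poll (assumed helper name; the real code runs the command over SSH via ssh_runner) could look like:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerProcess polls pgrep until the kube-apiserver process
// appears or the timeout elapses, using the same pattern as the log.
func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil // exit status 0: process found
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
}

func main() {
	fmt.Println(waitForAPIServerProcess(2 * time.Minute))
}
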
	I0103 20:13:43.628653   62015 api_server.go:88] waiting for apiserver healthz status ...
	I0103 20:13:43.628674   62015 api_server.go:253] Checking apiserver healthz at https://192.168.61.245:8443/healthz ...
	I0103 20:13:40.679867   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:40.679959   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:40.692707   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:41.179865   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:41.179962   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:41.192901   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:41.679604   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:41.679668   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:41.691755   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:42.179959   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:42.180082   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:42.193149   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:42.679682   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:42.679808   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:42.696777   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:43.179236   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:43.179343   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:43.195021   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:43.679230   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:43.679339   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:43.696886   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:44.179488   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:44.179558   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:44.194865   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:44.679087   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:44.679216   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:44.693383   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:45.179505   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:45.179607   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:45.190496   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:45.474145   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:45.474596   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | unable to find current IP address of domain old-k8s-version-927922 in network mk-old-k8s-version-927922
	I0103 20:13:45.474623   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | I0103 20:13:45.474542   63030 retry.go:31] will retry after 3.652901762s: waiting for machine to come up
	I0103 20:13:43.937146   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:13:45.938328   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:13:47.941499   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:13:47.277935   62015 api_server.go:279] https://192.168.61.245:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0103 20:13:47.277971   62015 api_server.go:103] status: https://192.168.61.245:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0103 20:13:47.277988   62015 api_server.go:253] Checking apiserver healthz at https://192.168.61.245:8443/healthz ...
	I0103 20:13:47.543418   62015 api_server.go:279] https://192.168.61.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0103 20:13:47.543449   62015 api_server.go:103] status: https://192.168.61.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0103 20:13:47.629720   62015 api_server.go:253] Checking apiserver healthz at https://192.168.61.245:8443/healthz ...
	I0103 20:13:47.635340   62015 api_server.go:279] https://192.168.61.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0103 20:13:47.635373   62015 api_server.go:103] status: https://192.168.61.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0103 20:13:48.128849   62015 api_server.go:253] Checking apiserver healthz at https://192.168.61.245:8443/healthz ...
	I0103 20:13:48.135534   62015 api_server.go:279] https://192.168.61.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0103 20:13:48.135576   62015 api_server.go:103] status: https://192.168.61.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0103 20:13:48.628977   62015 api_server.go:253] Checking apiserver healthz at https://192.168.61.245:8443/healthz ...
	I0103 20:13:48.634609   62015 api_server.go:279] https://192.168.61.245:8443/healthz returned 200:
	ok
	I0103 20:13:48.643475   62015 api_server.go:141] control plane version: v1.29.0-rc.2
	I0103 20:13:48.643505   62015 api_server.go:131] duration metric: took 5.01484434s to wait for apiserver health ...
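
The healthz probes above keep hitting https://192.168.61.245:8443/healthz until the apiserver answers 200 instead of 403/500. A bare-bones version of such a probe (illustrative only; a real client should trust the cluster CA, e.g. /var/lib/minikube/certs/ca.crt, rather than skipping verification) might be:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// InsecureSkipVerify keeps the sketch short; see the note above about the cluster CA.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for i := 0; i < 60; i++ {
		resp, err := client.Get("https://192.168.61.245:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("%d: %s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return // healthz finally returned "ok"
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}
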
	I0103 20:13:48.643517   62015 cni.go:84] Creating CNI manager for ""
	I0103 20:13:48.643526   62015 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0103 20:13:48.645945   62015 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0103 20:13:48.647556   62015 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0103 20:13:48.671093   62015 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0103 20:13:48.698710   62015 system_pods.go:43] waiting for kube-system pods to appear ...
	I0103 20:13:48.712654   62015 system_pods.go:59] 8 kube-system pods found
	I0103 20:13:48.712704   62015 system_pods.go:61] "coredns-76f75df574-rbx58" [d5e91e6a-e3f9-4dbc-83ff-3069cb67847c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0103 20:13:48.712717   62015 system_pods.go:61] "etcd-no-preload-749210" [3cfe84f3-28bd-490f-a7fc-152c1b9784ce] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0103 20:13:48.712729   62015 system_pods.go:61] "kube-apiserver-no-preload-749210" [1d9d03fa-23c6-4432-b7ec-905fcab8a628] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0103 20:13:48.712739   62015 system_pods.go:61] "kube-controller-manager-no-preload-749210" [4e4207ef-8844-4547-88a4-b12026250554] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0103 20:13:48.712761   62015 system_pods.go:61] "kube-proxy-5hwf4" [98fafdf5-9a74-4c9f-96eb-20064c72c4e1] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0103 20:13:48.712771   62015 system_pods.go:61] "kube-scheduler-no-preload-749210" [21e70024-26b0-4740-ba52-99893ca20809] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0103 20:13:48.712780   62015 system_pods.go:61] "metrics-server-57f55c9bc5-tqn5m" [8cc1dc91-fafb-4405-8820-a7f99ccbbb0c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0103 20:13:48.712793   62015 system_pods.go:61] "storage-provisioner" [1bf4f1d7-c083-47e7-9976-76bbc72e7bff] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0103 20:13:48.712806   62015 system_pods.go:74] duration metric: took 14.071881ms to wait for pod list to return data ...
	I0103 20:13:48.712818   62015 node_conditions.go:102] verifying NodePressure condition ...
	I0103 20:13:48.716271   62015 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0103 20:13:48.716301   62015 node_conditions.go:123] node cpu capacity is 2
	I0103 20:13:48.716326   62015 node_conditions.go:105] duration metric: took 3.496257ms to run NodePressure ...
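
The kube-system pod list and node-condition summary above come straight from the Kubernetes API. With client-go, an equivalent listing of kube-system pods (a generic sketch, not minikube's system_pods helper; the kubeconfig path is illustrative) is:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Any kubeconfig pointing at the cluster works here.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17885-9609/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s\t%s\n", p.Name, p.Status.Phase)
	}
}
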
	I0103 20:13:48.716348   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:49.020956   62015 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0103 20:13:49.025982   62015 kubeadm.go:787] kubelet initialised
	I0103 20:13:49.026003   62015 kubeadm.go:788] duration metric: took 5.022549ms waiting for restarted kubelet to initialise ...
	I0103 20:13:49.026010   62015 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:13:49.033471   62015 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-rbx58" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:49.038777   62015 pod_ready.go:97] node "no-preload-749210" hosting pod "coredns-76f75df574-rbx58" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-749210" has status "Ready":"False"
	I0103 20:13:49.038806   62015 pod_ready.go:81] duration metric: took 5.286579ms waiting for pod "coredns-76f75df574-rbx58" in "kube-system" namespace to be "Ready" ...
	E0103 20:13:49.038823   62015 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-749210" hosting pod "coredns-76f75df574-rbx58" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-749210" has status "Ready":"False"
	I0103 20:13:49.038834   62015 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-749210" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:49.044324   62015 pod_ready.go:97] node "no-preload-749210" hosting pod "etcd-no-preload-749210" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-749210" has status "Ready":"False"
	I0103 20:13:49.044349   62015 pod_ready.go:81] duration metric: took 5.506628ms waiting for pod "etcd-no-preload-749210" in "kube-system" namespace to be "Ready" ...
	E0103 20:13:49.044357   62015 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-749210" hosting pod "etcd-no-preload-749210" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-749210" has status "Ready":"False"
	I0103 20:13:49.044363   62015 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-749210" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:49.049022   62015 pod_ready.go:97] node "no-preload-749210" hosting pod "kube-apiserver-no-preload-749210" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-749210" has status "Ready":"False"
	I0103 20:13:49.049058   62015 pod_ready.go:81] duration metric: took 4.681942ms waiting for pod "kube-apiserver-no-preload-749210" in "kube-system" namespace to be "Ready" ...
	E0103 20:13:49.049068   62015 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-749210" hosting pod "kube-apiserver-no-preload-749210" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-749210" has status "Ready":"False"
	I0103 20:13:49.049073   62015 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-749210" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:49.102378   62015 pod_ready.go:97] node "no-preload-749210" hosting pod "kube-controller-manager-no-preload-749210" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-749210" has status "Ready":"False"
	I0103 20:13:49.102407   62015 pod_ready.go:81] duration metric: took 53.323019ms waiting for pod "kube-controller-manager-no-preload-749210" in "kube-system" namespace to be "Ready" ...
	E0103 20:13:49.102415   62015 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-749210" hosting pod "kube-controller-manager-no-preload-749210" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-749210" has status "Ready":"False"
	I0103 20:13:49.102424   62015 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-5hwf4" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:49.504820   62015 pod_ready.go:97] node "no-preload-749210" hosting pod "kube-proxy-5hwf4" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-749210" has status "Ready":"False"
	I0103 20:13:49.504852   62015 pod_ready.go:81] duration metric: took 402.417876ms waiting for pod "kube-proxy-5hwf4" in "kube-system" namespace to be "Ready" ...
	E0103 20:13:49.504865   62015 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-749210" hosting pod "kube-proxy-5hwf4" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-749210" has status "Ready":"False"
	I0103 20:13:49.504875   62015 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-749210" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:49.905230   62015 pod_ready.go:97] node "no-preload-749210" hosting pod "kube-scheduler-no-preload-749210" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-749210" has status "Ready":"False"
	I0103 20:13:49.905265   62015 pod_ready.go:81] duration metric: took 400.380902ms waiting for pod "kube-scheduler-no-preload-749210" in "kube-system" namespace to be "Ready" ...
	E0103 20:13:49.905278   62015 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-749210" hosting pod "kube-scheduler-no-preload-749210" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-749210" has status "Ready":"False"
	I0103 20:13:49.905287   62015 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:50.304848   62015 pod_ready.go:97] node "no-preload-749210" hosting pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-749210" has status "Ready":"False"
	I0103 20:13:50.304883   62015 pod_ready.go:81] duration metric: took 399.567527ms waiting for pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace to be "Ready" ...
	E0103 20:13:50.304896   62015 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-749210" hosting pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-749210" has status "Ready":"False"
	I0103 20:13:50.304905   62015 pod_ready.go:38] duration metric: took 1.278887327s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
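
The pod_ready waits above are ultimately a read of each pod's Ready status condition (skipped here only because the node itself is not yet Ready). Checking that condition with the client-go types looks roughly like the following sketch:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodReady reports whether the PodReady condition is True, which is the
// check the pod_ready loop keeps retrying until it succeeds.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	pod := &corev1.Pod{}
	pod.Status.Conditions = []corev1.PodCondition{{Type: corev1.PodReady, Status: corev1.ConditionFalse}}
	fmt.Println(isPodReady(pod)) // false
}
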
	I0103 20:13:50.304926   62015 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0103 20:13:50.331405   62015 ops.go:34] apiserver oom_adj: -16
	I0103 20:13:50.331428   62015 kubeadm.go:640] restartCluster took 21.020194358s
	I0103 20:13:50.331439   62015 kubeadm.go:406] StartCluster complete in 21.075864121s
	I0103 20:13:50.331459   62015 settings.go:142] acquiring lock: {Name:mkd213c48538fa01cb82b417485055a8adbf5e4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:13:50.331541   62015 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17885-9609/kubeconfig
	I0103 20:13:50.333553   62015 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-9609/kubeconfig: {Name:mkbd4e6a8b39f5a4a43fb71671a7bbd8b1617cf1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:13:50.333969   62015 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0103 20:13:50.334045   62015 addons.go:69] Setting storage-provisioner=true in profile "no-preload-749210"
	I0103 20:13:50.334064   62015 addons.go:237] Setting addon storage-provisioner=true in "no-preload-749210"
	W0103 20:13:50.334072   62015 addons.go:246] addon storage-provisioner should already be in state true
	I0103 20:13:50.334082   62015 config.go:182] Loaded profile config "no-preload-749210": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0103 20:13:50.334121   62015 host.go:66] Checking if "no-preload-749210" exists ...
	I0103 20:13:50.334129   62015 addons.go:69] Setting default-storageclass=true in profile "no-preload-749210"
	I0103 20:13:50.334143   62015 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-749210"
	I0103 20:13:50.334556   62015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:13:50.334588   62015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:13:50.334602   62015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:13:50.334620   62015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:13:50.334681   62015 addons.go:69] Setting metrics-server=true in profile "no-preload-749210"
	I0103 20:13:50.334708   62015 addons.go:237] Setting addon metrics-server=true in "no-preload-749210"
	I0103 20:13:50.334712   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	W0103 20:13:50.334717   62015 addons.go:246] addon metrics-server should already be in state true
	I0103 20:13:50.334756   62015 host.go:66] Checking if "no-preload-749210" exists ...
	I0103 20:13:50.335152   62015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:13:50.335190   62015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:13:50.343173   62015 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-749210" context rescaled to 1 replicas
	I0103 20:13:50.343213   62015 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.245 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0103 20:13:50.345396   62015 out.go:177] * Verifying Kubernetes components...
	I0103 20:13:50.347721   62015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 20:13:50.353122   62015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34207
	I0103 20:13:50.353250   62015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35835
	I0103 20:13:50.353274   62015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44003
	I0103 20:13:50.353737   62015 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:13:50.353896   62015 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:13:50.354283   62015 main.go:141] libmachine: Using API Version  1
	I0103 20:13:50.354299   62015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:13:50.354488   62015 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:13:50.354491   62015 main.go:141] libmachine: Using API Version  1
	I0103 20:13:50.354588   62015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:13:50.354889   62015 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:13:50.355115   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetState
	I0103 20:13:50.355165   62015 main.go:141] libmachine: Using API Version  1
	I0103 20:13:50.355181   62015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:13:50.355244   62015 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:13:50.355746   62015 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:13:50.356199   62015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:13:50.356239   62015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:13:50.356792   62015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:13:50.356830   62015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:13:50.359095   62015 addons.go:237] Setting addon default-storageclass=true in "no-preload-749210"
	W0103 20:13:50.359114   62015 addons.go:246] addon default-storageclass should already be in state true
	I0103 20:13:50.359139   62015 host.go:66] Checking if "no-preload-749210" exists ...
	I0103 20:13:50.359554   62015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:13:50.359595   62015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:13:50.377094   62015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34801
	I0103 20:13:50.377218   62015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33435
	I0103 20:13:50.377679   62015 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:13:50.377779   62015 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:13:50.378353   62015 main.go:141] libmachine: Using API Version  1
	I0103 20:13:50.378376   62015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:13:50.378472   62015 main.go:141] libmachine: Using API Version  1
	I0103 20:13:50.378488   62015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:13:50.378816   62015 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:13:50.378874   62015 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:13:50.379033   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetState
	I0103 20:13:50.379033   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetState
	I0103 20:13:50.381013   62015 main.go:141] libmachine: (no-preload-749210) Calling .DriverName
	I0103 20:13:50.381240   62015 main.go:141] libmachine: (no-preload-749210) Calling .DriverName
	I0103 20:13:50.389265   62015 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 20:13:50.383848   62015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38103
	I0103 20:13:50.391000   62015 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0103 20:13:50.391023   62015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0103 20:13:50.391049   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHHostname
	I0103 20:13:50.391062   62015 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0103 20:13:45.679265   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:45.679374   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:45.690232   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:46.179862   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:46.179963   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:46.190942   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:46.679624   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:46.679738   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:46.691578   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:47.179185   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:47.179280   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:47.193995   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:47.194029   62050 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0103 20:13:47.194050   62050 kubeadm.go:1135] stopping kube-system containers ...
	I0103 20:13:47.194061   62050 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0103 20:13:47.194114   62050 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0103 20:13:47.235512   62050 cri.go:89] found id: ""
	I0103 20:13:47.235625   62050 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0103 20:13:47.251115   62050 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0103 20:13:47.261566   62050 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0103 20:13:47.261631   62050 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0103 20:13:47.271217   62050 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0103 20:13:47.271244   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:47.408550   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:48.262356   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:48.492357   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:48.597607   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:48.699097   62050 api_server.go:52] waiting for apiserver process to appear ...
	I0103 20:13:48.699194   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:13:49.199349   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:13:49.699758   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:13:50.199818   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:13:50.392557   62015 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0103 20:13:50.392577   62015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0103 20:13:50.392597   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHHostname
	I0103 20:13:50.391469   62015 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:13:50.393835   62015 main.go:141] libmachine: Using API Version  1
	I0103 20:13:50.393854   62015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:13:50.394340   62015 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:13:50.394967   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:50.395384   62015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:13:50.395419   62015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:13:50.395602   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHPort
	I0103 20:13:50.395663   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:87:c7", ip: ""} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:50.395683   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:50.395811   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHKeyPath
	I0103 20:13:50.395981   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHUsername
	I0103 20:13:50.396173   62015 sshutil.go:53] new ssh client: &{IP:192.168.61.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/no-preload-749210/id_rsa Username:docker}
	I0103 20:13:50.398544   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:50.399117   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:87:c7", ip: ""} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:50.399142   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:50.399363   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHPort
	I0103 20:13:50.399582   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHKeyPath
	I0103 20:13:50.399692   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHUsername
	I0103 20:13:50.399761   62015 sshutil.go:53] new ssh client: &{IP:192.168.61.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/no-preload-749210/id_rsa Username:docker}
	I0103 20:13:50.434719   62015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44691
	I0103 20:13:50.435279   62015 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:13:50.435938   62015 main.go:141] libmachine: Using API Version  1
	I0103 20:13:50.435972   62015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:13:50.436407   62015 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:13:50.436630   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetState
	I0103 20:13:50.438992   62015 main.go:141] libmachine: (no-preload-749210) Calling .DriverName
	I0103 20:13:50.442816   62015 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0103 20:13:50.442835   62015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0103 20:13:50.442856   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHHostname
	I0103 20:13:50.450157   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:50.451549   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:87:c7", ip: ""} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:50.451575   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHPort
	I0103 20:13:50.451571   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:50.453023   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHKeyPath
	I0103 20:13:50.453577   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHUsername
	I0103 20:13:50.453753   62015 sshutil.go:53] new ssh client: &{IP:192.168.61.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/no-preload-749210/id_rsa Username:docker}
	I0103 20:13:50.556135   62015 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0103 20:13:50.556161   62015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0103 20:13:50.583620   62015 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0103 20:13:50.583643   62015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0103 20:13:50.589708   62015 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0103 20:13:50.614203   62015 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0103 20:13:50.631936   62015 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0103 20:13:50.631961   62015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0103 20:13:50.708658   62015 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0103 20:13:50.772364   62015 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0103 20:13:50.772434   62015 node_ready.go:35] waiting up to 6m0s for node "no-preload-749210" to be "Ready" ...
	I0103 20:13:51.785361   62015 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.195620446s)
	I0103 20:13:51.785407   62015 main.go:141] libmachine: Making call to close driver server
	I0103 20:13:51.785421   62015 main.go:141] libmachine: (no-preload-749210) Calling .Close
	I0103 20:13:51.785427   62015 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.171187695s)
	I0103 20:13:51.785463   62015 main.go:141] libmachine: Making call to close driver server
	I0103 20:13:51.785488   62015 main.go:141] libmachine: (no-preload-749210) Calling .Close
	I0103 20:13:51.785603   62015 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.076908391s)
	I0103 20:13:51.785687   62015 main.go:141] libmachine: (no-preload-749210) DBG | Closing plugin on server side
	I0103 20:13:51.785717   62015 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:13:51.785730   62015 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:13:51.785739   62015 main.go:141] libmachine: Making call to close driver server
	I0103 20:13:51.785741   62015 main.go:141] libmachine: (no-preload-749210) DBG | Closing plugin on server side
	I0103 20:13:51.785748   62015 main.go:141] libmachine: (no-preload-749210) Calling .Close
	I0103 20:13:51.785819   62015 main.go:141] libmachine: Making call to close driver server
	I0103 20:13:51.785837   62015 main.go:141] libmachine: (no-preload-749210) Calling .Close
	I0103 20:13:51.786108   62015 main.go:141] libmachine: (no-preload-749210) DBG | Closing plugin on server side
	I0103 20:13:51.786143   62015 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:13:51.786152   62015 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:13:51.786166   62015 main.go:141] libmachine: Making call to close driver server
	I0103 20:13:51.786178   62015 main.go:141] libmachine: (no-preload-749210) Calling .Close
	I0103 20:13:51.786444   62015 main.go:141] libmachine: (no-preload-749210) DBG | Closing plugin on server side
	I0103 20:13:51.786495   62015 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:13:51.786536   62015 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:13:51.786553   62015 addons.go:473] Verifying addon metrics-server=true in "no-preload-749210"
	I0103 20:13:51.787346   62015 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:13:51.787365   62015 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:13:51.787376   62015 main.go:141] libmachine: Making call to close driver server
	I0103 20:13:51.787386   62015 main.go:141] libmachine: (no-preload-749210) Calling .Close
	I0103 20:13:51.787596   62015 main.go:141] libmachine: (no-preload-749210) DBG | Closing plugin on server side
	I0103 20:13:51.787638   62015 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:13:51.787652   62015 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:13:51.787855   62015 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:13:51.787859   62015 main.go:141] libmachine: (no-preload-749210) DBG | Closing plugin on server side
	I0103 20:13:51.787871   62015 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:13:51.797560   62015 main.go:141] libmachine: Making call to close driver server
	I0103 20:13:51.797584   62015 main.go:141] libmachine: (no-preload-749210) Calling .Close
	I0103 20:13:51.797860   62015 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:13:51.797874   62015 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:13:51.800087   62015 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I0103 20:13:49.131462   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:49.132013   61400 main.go:141] libmachine: (old-k8s-version-927922) Found IP for machine: 192.168.72.12
	I0103 20:13:49.132041   61400 main.go:141] libmachine: (old-k8s-version-927922) Reserving static IP address...
	I0103 20:13:49.132059   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has current primary IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:49.132507   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "old-k8s-version-927922", mac: "52:54:00:61:79:06", ip: "192.168.72.12"} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:13:49.132543   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | skip adding static IP to network mk-old-k8s-version-927922 - found existing host DHCP lease matching {name: "old-k8s-version-927922", mac: "52:54:00:61:79:06", ip: "192.168.72.12"}
	I0103 20:13:49.132560   61400 main.go:141] libmachine: (old-k8s-version-927922) Reserved static IP address: 192.168.72.12
	I0103 20:13:49.132582   61400 main.go:141] libmachine: (old-k8s-version-927922) Waiting for SSH to be available...
	I0103 20:13:49.132597   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | Getting to WaitForSSH function...
	I0103 20:13:49.135129   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:49.135499   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:79:06", ip: ""} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:13:49.135536   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:49.135703   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | Using SSH client type: external
	I0103 20:13:49.135728   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | Using SSH private key: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/old-k8s-version-927922/id_rsa (-rw-------)
	I0103 20:13:49.135765   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.12 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17885-9609/.minikube/machines/old-k8s-version-927922/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0103 20:13:49.135780   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | About to run SSH command:
	I0103 20:13:49.135796   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | exit 0
	I0103 20:13:49.226568   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | SSH cmd err, output: <nil>: 
	I0103 20:13:49.226890   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetConfigRaw
	I0103 20:13:49.227536   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetIP
	I0103 20:13:49.230668   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:49.231038   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:79:06", ip: ""} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:13:49.231064   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:49.231277   61400 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/old-k8s-version-927922/config.json ...
	I0103 20:13:49.231456   61400 machine.go:88] provisioning docker machine ...
	I0103 20:13:49.231473   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .DriverName
	I0103 20:13:49.231708   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetMachineName
	I0103 20:13:49.231862   61400 buildroot.go:166] provisioning hostname "old-k8s-version-927922"
	I0103 20:13:49.231885   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetMachineName
	I0103 20:13:49.232002   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHHostname
	I0103 20:13:49.234637   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:49.235012   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:79:06", ip: ""} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:13:49.235048   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:49.235196   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHPort
	I0103 20:13:49.235338   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHKeyPath
	I0103 20:13:49.235445   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHKeyPath
	I0103 20:13:49.235543   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHUsername
	I0103 20:13:49.235748   61400 main.go:141] libmachine: Using SSH client type: native
	I0103 20:13:49.236196   61400 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.72.12 22 <nil> <nil>}
	I0103 20:13:49.236226   61400 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-927922 && echo "old-k8s-version-927922" | sudo tee /etc/hostname
	I0103 20:13:49.377588   61400 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-927922
	
	I0103 20:13:49.377625   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHHostname
	I0103 20:13:49.381244   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:49.381634   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:79:06", ip: ""} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:13:49.381680   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:49.381885   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHPort
	I0103 20:13:49.382115   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHKeyPath
	I0103 20:13:49.382311   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHKeyPath
	I0103 20:13:49.382538   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHUsername
	I0103 20:13:49.382721   61400 main.go:141] libmachine: Using SSH client type: native
	I0103 20:13:49.383096   61400 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.72.12 22 <nil> <nil>}
	I0103 20:13:49.383125   61400 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-927922' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-927922/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-927922' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0103 20:13:49.517214   61400 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0103 20:13:49.517246   61400 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17885-9609/.minikube CaCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17885-9609/.minikube}
	I0103 20:13:49.517268   61400 buildroot.go:174] setting up certificates
	I0103 20:13:49.517280   61400 provision.go:83] configureAuth start
	I0103 20:13:49.517299   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetMachineName
	I0103 20:13:49.517606   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetIP
	I0103 20:13:49.520819   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:49.521255   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:79:06", ip: ""} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:13:49.521284   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:49.521442   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHHostname
	I0103 20:13:49.523926   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:49.524310   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:79:06", ip: ""} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:13:49.524364   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:49.524495   61400 provision.go:138] copyHostCerts
	I0103 20:13:49.524604   61400 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem, removing ...
	I0103 20:13:49.524618   61400 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem
	I0103 20:13:49.524714   61400 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem (1078 bytes)
	I0103 20:13:49.524842   61400 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem, removing ...
	I0103 20:13:49.524855   61400 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem
	I0103 20:13:49.524885   61400 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem (1123 bytes)
	I0103 20:13:49.524982   61400 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem, removing ...
	I0103 20:13:49.525020   61400 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem
	I0103 20:13:49.525063   61400 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem (1679 bytes)
	I0103 20:13:49.525143   61400 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-927922 san=[192.168.72.12 192.168.72.12 localhost 127.0.0.1 minikube old-k8s-version-927922]
	I0103 20:13:49.896621   61400 provision.go:172] copyRemoteCerts
	I0103 20:13:49.896687   61400 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0103 20:13:49.896728   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHHostname
	I0103 20:13:49.899859   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:49.900239   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:79:06", ip: ""} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:13:49.900274   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:49.900456   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHPort
	I0103 20:13:49.900690   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHKeyPath
	I0103 20:13:49.900873   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHUsername
	I0103 20:13:49.901064   61400 sshutil.go:53] new ssh client: &{IP:192.168.72.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/old-k8s-version-927922/id_rsa Username:docker}
	I0103 20:13:49.993569   61400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0103 20:13:50.017597   61400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0103 20:13:50.041139   61400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0103 20:13:50.064499   61400 provision.go:86] duration metric: configureAuth took 547.178498ms
	I0103 20:13:50.064533   61400 buildroot.go:189] setting minikube options for container-runtime
	I0103 20:13:50.064770   61400 config.go:182] Loaded profile config "old-k8s-version-927922": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0103 20:13:50.064848   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHHostname
	I0103 20:13:50.068198   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:50.068637   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:79:06", ip: ""} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:13:50.068672   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:50.068873   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHPort
	I0103 20:13:50.069080   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHKeyPath
	I0103 20:13:50.069284   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHKeyPath
	I0103 20:13:50.069457   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHUsername
	I0103 20:13:50.069640   61400 main.go:141] libmachine: Using SSH client type: native
	I0103 20:13:50.070115   61400 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.72.12 22 <nil> <nil>}
	I0103 20:13:50.070146   61400 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0103 20:13:50.450845   61400 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0103 20:13:50.450873   61400 machine.go:91] provisioned docker machine in 1.219404511s
	I0103 20:13:50.450886   61400 start.go:300] post-start starting for "old-k8s-version-927922" (driver="kvm2")
	I0103 20:13:50.450899   61400 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0103 20:13:50.450924   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .DriverName
	I0103 20:13:50.451263   61400 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0103 20:13:50.451328   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHHostname
	I0103 20:13:50.455003   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:50.455413   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:79:06", ip: ""} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:13:50.455436   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:50.455644   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHPort
	I0103 20:13:50.455796   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHKeyPath
	I0103 20:13:50.455919   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHUsername
	I0103 20:13:50.456031   61400 sshutil.go:53] new ssh client: &{IP:192.168.72.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/old-k8s-version-927922/id_rsa Username:docker}
	I0103 20:13:50.563846   61400 ssh_runner.go:195] Run: cat /etc/os-release
	I0103 20:13:50.569506   61400 info.go:137] Remote host: Buildroot 2021.02.12
	I0103 20:13:50.569532   61400 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-9609/.minikube/addons for local assets ...
	I0103 20:13:50.569626   61400 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-9609/.minikube/files for local assets ...
	I0103 20:13:50.569726   61400 filesync.go:149] local asset: /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem -> 167952.pem in /etc/ssl/certs
	I0103 20:13:50.569857   61400 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0103 20:13:50.581218   61400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem --> /etc/ssl/certs/167952.pem (1708 bytes)
	I0103 20:13:50.612328   61400 start.go:303] post-start completed in 161.425373ms
	I0103 20:13:50.612359   61400 fix.go:56] fixHost completed within 20.392994827s
	I0103 20:13:50.612383   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHHostname
	I0103 20:13:50.615776   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:50.616241   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:79:06", ip: ""} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:13:50.616268   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:50.616368   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHPort
	I0103 20:13:50.616655   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHKeyPath
	I0103 20:13:50.616849   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHKeyPath
	I0103 20:13:50.617088   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHUsername
	I0103 20:13:50.617286   61400 main.go:141] libmachine: Using SSH client type: native
	I0103 20:13:50.617764   61400 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.72.12 22 <nil> <nil>}
	I0103 20:13:50.617791   61400 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0103 20:13:50.740437   61400 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704312830.691065491
	
	I0103 20:13:50.740506   61400 fix.go:206] guest clock: 1704312830.691065491
	I0103 20:13:50.740528   61400 fix.go:219] Guest: 2024-01-03 20:13:50.691065491 +0000 UTC Remote: 2024-01-03 20:13:50.612363446 +0000 UTC m=+357.606588552 (delta=78.702045ms)
	I0103 20:13:50.740563   61400 fix.go:190] guest clock delta is within tolerance: 78.702045ms
	I0103 20:13:50.740574   61400 start.go:83] releasing machines lock for "old-k8s-version-927922", held for 20.521248173s
	I0103 20:13:50.740606   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .DriverName
	I0103 20:13:50.740879   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetIP
	I0103 20:13:50.743952   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:50.744357   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:79:06", ip: ""} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:13:50.744397   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:50.744668   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .DriverName
	I0103 20:13:50.745932   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .DriverName
	I0103 20:13:50.746189   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .DriverName
	I0103 20:13:50.746302   61400 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0103 20:13:50.746343   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHHostname
	I0103 20:13:50.746759   61400 ssh_runner.go:195] Run: cat /version.json
	I0103 20:13:50.746784   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHHostname
	I0103 20:13:50.749593   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:50.749994   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:79:06", ip: ""} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:13:50.750029   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:50.750496   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHPort
	I0103 20:13:50.750738   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHKeyPath
	I0103 20:13:50.750900   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHUsername
	I0103 20:13:50.751141   61400 sshutil.go:53] new ssh client: &{IP:192.168.72.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/old-k8s-version-927922/id_rsa Username:docker}
	I0103 20:13:50.751696   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHPort
	I0103 20:13:50.751779   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:50.751842   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHKeyPath
	I0103 20:13:50.751898   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:79:06", ip: ""} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:13:50.751960   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHUsername
	I0103 20:13:50.752031   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:50.752063   61400 sshutil.go:53] new ssh client: &{IP:192.168.72.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/old-k8s-version-927922/id_rsa Username:docker}
	I0103 20:13:50.841084   61400 ssh_runner.go:195] Run: systemctl --version
	I0103 20:13:50.882564   61400 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0103 20:13:51.041188   61400 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0103 20:13:51.049023   61400 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0103 20:13:51.049103   61400 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0103 20:13:51.068267   61400 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0103 20:13:51.068297   61400 start.go:475] detecting cgroup driver to use...
	I0103 20:13:51.068371   61400 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0103 20:13:51.086266   61400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0103 20:13:51.101962   61400 docker.go:203] disabling cri-docker service (if available) ...
	I0103 20:13:51.102030   61400 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0103 20:13:51.118269   61400 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0103 20:13:51.134642   61400 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0103 20:13:51.310207   61400 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0103 20:13:51.495609   61400 docker.go:219] disabling docker service ...
	I0103 20:13:51.495743   61400 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0103 20:13:51.512101   61400 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0103 20:13:51.527244   61400 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0103 20:13:51.696874   61400 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0103 20:13:51.836885   61400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0103 20:13:51.849905   61400 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0103 20:13:51.867827   61400 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0103 20:13:51.867895   61400 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:13:51.877598   61400 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0103 20:13:51.877713   61400 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:13:51.886744   61400 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:13:51.898196   61400 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:13:51.910021   61400 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0103 20:13:51.921882   61400 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0103 20:13:51.930668   61400 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0103 20:13:51.930727   61400 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0103 20:13:51.943294   61400 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0103 20:13:51.952273   61400 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0103 20:13:52.065108   61400 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0103 20:13:52.272042   61400 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0103 20:13:52.272143   61400 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0103 20:13:52.277268   61400 start.go:543] Will wait 60s for crictl version
	I0103 20:13:52.277436   61400 ssh_runner.go:195] Run: which crictl
	I0103 20:13:52.281294   61400 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0103 20:13:52.334056   61400 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0103 20:13:52.334231   61400 ssh_runner.go:195] Run: crio --version
	I0103 20:13:52.390900   61400 ssh_runner.go:195] Run: crio --version
	I0103 20:13:52.454400   61400 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I0103 20:13:52.455682   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetIP
	I0103 20:13:52.459194   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:52.459656   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:79:06", ip: ""} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:13:52.459683   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:52.460250   61400 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0103 20:13:52.465579   61400 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0103 20:13:52.480500   61400 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0103 20:13:52.480620   61400 ssh_runner.go:195] Run: sudo crictl images --output json
	I0103 20:13:52.532378   61400 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0103 20:13:52.532450   61400 ssh_runner.go:195] Run: which lz4
	I0103 20:13:52.537132   61400 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0103 20:13:52.541880   61400 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0103 20:13:52.541912   61400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0103 20:13:50.443235   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:13:52.942235   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:13:51.801673   62015 addons.go:508] enable addons completed in 1.467711333s: enabled=[metrics-server storage-provisioner default-storageclass]
	I0103 20:13:52.779944   62015 node_ready.go:58] node "no-preload-749210" has status "Ready":"False"
	I0103 20:13:50.699945   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:13:51.199773   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:13:51.227739   62050 api_server.go:72] duration metric: took 2.52863821s to wait for apiserver process to appear ...
	I0103 20:13:51.227768   62050 api_server.go:88] waiting for apiserver healthz status ...
	I0103 20:13:51.227789   62050 api_server.go:253] Checking apiserver healthz at https://192.168.39.139:8444/healthz ...
	I0103 20:13:51.228288   62050 api_server.go:269] stopped: https://192.168.39.139:8444/healthz: Get "https://192.168.39.139:8444/healthz": dial tcp 192.168.39.139:8444: connect: connection refused
	I0103 20:13:51.728906   62050 api_server.go:253] Checking apiserver healthz at https://192.168.39.139:8444/healthz ...
	I0103 20:13:55.679221   62050 api_server.go:279] https://192.168.39.139:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0103 20:13:55.679255   62050 api_server.go:103] status: https://192.168.39.139:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0103 20:13:55.679273   62050 api_server.go:253] Checking apiserver healthz at https://192.168.39.139:8444/healthz ...
	I0103 20:13:55.722466   62050 api_server.go:279] https://192.168.39.139:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0103 20:13:55.722528   62050 api_server.go:103] status: https://192.168.39.139:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0103 20:13:55.728699   62050 api_server.go:253] Checking apiserver healthz at https://192.168.39.139:8444/healthz ...
	I0103 20:13:55.771739   62050 api_server.go:279] https://192.168.39.139:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0103 20:13:55.771841   62050 api_server.go:103] status: https://192.168.39.139:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0103 20:13:56.228041   62050 api_server.go:253] Checking apiserver healthz at https://192.168.39.139:8444/healthz ...
	I0103 20:13:56.234578   62050 api_server.go:279] https://192.168.39.139:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0103 20:13:56.234618   62050 api_server.go:103] status: https://192.168.39.139:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0103 20:13:56.728122   62050 api_server.go:253] Checking apiserver healthz at https://192.168.39.139:8444/healthz ...
	I0103 20:13:56.734464   62050 api_server.go:279] https://192.168.39.139:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0103 20:13:56.734505   62050 api_server.go:103] status: https://192.168.39.139:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0103 20:13:57.228124   62050 api_server.go:253] Checking apiserver healthz at https://192.168.39.139:8444/healthz ...
	I0103 20:13:57.239527   62050 api_server.go:279] https://192.168.39.139:8444/healthz returned 200:
	ok
	I0103 20:13:57.253416   62050 api_server.go:141] control plane version: v1.28.4
	I0103 20:13:57.253445   62050 api_server.go:131] duration metric: took 6.025669125s to wait for apiserver health ...
	I0103 20:13:57.253456   62050 cni.go:84] Creating CNI manager for ""
	I0103 20:13:57.253464   62050 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0103 20:13:57.255608   62050 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
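	For context, the api_server.go lines above record a simple poll of the apiserver's /healthz endpoint until it stops returning 500 ("[-]poststarthook/... failed: reason withheld" means some post-start hooks have not finished yet) and finally answers 200 "ok". Below is a minimal, hand-written Go sketch of such a polling loop; it is not minikube's actual implementation, and the retry interval, per-request timeout, and TLS handling are assumptions made for illustration only.

	// Sketch only: poll an apiserver /healthz URL until it returns 200 or a deadline passes.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			// During bootstrap the apiserver serves a self-signed certificate,
			// so this sketch skips verification (assumption, illustration only).
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // corresponds to "returned 200: ok" above
				}
				// A 500 with failed post-start hooks means the control plane
				// is still coming up; log it and keep retrying.
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.39.139:8444/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}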
	I0103 20:13:54.091654   61400 crio.go:444] Took 1.554550 seconds to copy over tarball
	I0103 20:13:54.091734   61400 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0103 20:13:57.252728   61400 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.160960283s)
	I0103 20:13:57.252762   61400 crio.go:451] Took 3.161068 seconds to extract the tarball
	I0103 20:13:57.252773   61400 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0103 20:13:57.307431   61400 ssh_runner.go:195] Run: sudo crictl images --output json
	I0103 20:13:57.362170   61400 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0103 20:13:57.362199   61400 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0103 20:13:57.362266   61400 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 20:13:57.362306   61400 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0103 20:13:57.362491   61400 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0103 20:13:57.362505   61400 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0103 20:13:57.362630   61400 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0103 20:13:57.362663   61400 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0103 20:13:57.362749   61400 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0103 20:13:57.362830   61400 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0103 20:13:57.364964   61400 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0103 20:13:57.364981   61400 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0103 20:13:57.364999   61400 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0103 20:13:57.365049   61400 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0103 20:13:57.365081   61400 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 20:13:57.365159   61400 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0103 20:13:57.365337   61400 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0103 20:13:57.365364   61400 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0103 20:13:57.585886   61400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0103 20:13:57.611291   61400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0103 20:13:57.622467   61400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0103 20:13:57.623443   61400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0103 20:13:57.627321   61400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0103 20:13:57.630211   61400 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0103 20:13:57.630253   61400 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0103 20:13:57.630299   61400 ssh_runner.go:195] Run: which crictl
	I0103 20:13:57.647358   61400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0103 20:13:57.670079   61400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0103 20:13:57.724516   61400 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0103 20:13:57.724560   61400 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0103 20:13:57.724606   61400 ssh_runner.go:195] Run: which crictl
	I0103 20:13:57.747338   61400 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0103 20:13:57.747387   61400 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0103 20:13:57.747451   61400 ssh_runner.go:195] Run: which crictl
	I0103 20:13:57.767682   61400 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0103 20:13:57.767741   61400 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0103 20:13:57.767749   61400 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0103 20:13:57.767772   61400 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0103 20:13:57.767782   61400 ssh_runner.go:195] Run: which crictl
	I0103 20:13:57.767778   61400 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0103 20:13:57.767834   61400 ssh_runner.go:195] Run: which crictl
	I0103 20:13:57.811841   61400 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0103 20:13:57.811895   61400 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0103 20:13:57.811861   61400 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0103 20:13:57.811948   61400 ssh_runner.go:195] Run: which crictl
	I0103 20:13:57.811984   61400 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0103 20:13:57.811948   61400 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0103 20:13:57.812053   61400 ssh_runner.go:195] Run: which crictl
	I0103 20:13:57.812098   61400 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0103 20:13:57.812128   61400 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0103 20:13:57.849648   61400 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0103 20:13:57.849722   61400 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0103 20:13:57.916421   61400 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0103 20:13:57.916483   61400 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0103 20:13:57.916529   61400 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.1
	I0103 20:13:57.936449   61400 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0103 20:13:57.936474   61400 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0103 20:13:57.936485   61400 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0103 20:13:57.936538   61400 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0103 20:13:55.436957   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:13:57.441634   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:13:55.278078   62015 node_ready.go:58] node "no-preload-749210" has status "Ready":"False"
	I0103 20:13:57.280673   62015 node_ready.go:58] node "no-preload-749210" has status "Ready":"False"
	I0103 20:13:58.185787   62015 node_ready.go:49] node "no-preload-749210" has status "Ready":"True"
	I0103 20:13:58.185819   62015 node_ready.go:38] duration metric: took 7.413368774s waiting for node "no-preload-749210" to be "Ready" ...
	I0103 20:13:58.185837   62015 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:13:58.196599   62015 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-rbx58" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:58.203024   62015 pod_ready.go:92] pod "coredns-76f75df574-rbx58" in "kube-system" namespace has status "Ready":"True"
	I0103 20:13:58.203047   62015 pod_ready.go:81] duration metric: took 6.423108ms waiting for pod "coredns-76f75df574-rbx58" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:58.203057   62015 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-749210" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:57.257123   62050 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0103 20:13:57.293641   62050 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0103 20:13:57.341721   62050 system_pods.go:43] waiting for kube-system pods to appear ...
	I0103 20:13:57.360995   62050 system_pods.go:59] 8 kube-system pods found
	I0103 20:13:57.361054   62050 system_pods.go:61] "coredns-5dd5756b68-zxzqg" [d066762e-7e1f-4b3a-9b21-6a7a3ca53edd] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0103 20:13:57.361065   62050 system_pods.go:61] "etcd-default-k8s-diff-port-018788" [c0023ec6-ae61-4532-840e-287e9945f4ec] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0103 20:13:57.361109   62050 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-018788" [bba03f36-cef8-4e19-adc5-1a65756bdf1c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0103 20:13:57.361132   62050 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-018788" [baf7a3c2-3573-4977-be30-d63e4df2de22] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0103 20:13:57.361147   62050 system_pods.go:61] "kube-proxy-wqjlv" [de5a1b04-4bce-4111-bfe8-2adb2f947d78] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0103 20:13:57.361171   62050 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-018788" [cdc74e5c-0085-49ae-9471-fce52a1a6b2f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0103 20:13:57.361189   62050 system_pods.go:61] "metrics-server-57f55c9bc5-pgbbj" [ee3963d9-1627-4e78-91e5-1f92c2011f4b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0103 20:13:57.361198   62050 system_pods.go:61] "storage-provisioner" [ef3511cb-5587-4ea5-86b6-d52cc5afb226] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0103 20:13:57.361207   62050 system_pods.go:74] duration metric: took 19.402129ms to wait for pod list to return data ...
	I0103 20:13:57.361218   62050 node_conditions.go:102] verifying NodePressure condition ...
	I0103 20:13:57.369396   62050 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0103 20:13:57.369435   62050 node_conditions.go:123] node cpu capacity is 2
	I0103 20:13:57.369449   62050 node_conditions.go:105] duration metric: took 8.224276ms to run NodePressure ...
	I0103 20:13:57.369470   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:57.615954   62050 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0103 20:13:57.624280   62050 kubeadm.go:787] kubelet initialised
	I0103 20:13:57.624312   62050 kubeadm.go:788] duration metric: took 8.328431ms waiting for restarted kubelet to initialise ...
	I0103 20:13:57.624321   62050 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:13:57.637920   62050 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-zxzqg" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:58.734401   62050 pod_ready.go:97] node "default-k8s-diff-port-018788" hosting pod "coredns-5dd5756b68-zxzqg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018788" has status "Ready":"False"
	I0103 20:13:58.734439   62050 pod_ready.go:81] duration metric: took 1.096478242s waiting for pod "coredns-5dd5756b68-zxzqg" in "kube-system" namespace to be "Ready" ...
	E0103 20:13:58.734454   62050 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-018788" hosting pod "coredns-5dd5756b68-zxzqg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018788" has status "Ready":"False"
	I0103 20:13:58.734463   62050 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-018788" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:59.605120   62050 pod_ready.go:97] node "default-k8s-diff-port-018788" hosting pod "etcd-default-k8s-diff-port-018788" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018788" has status "Ready":"False"
	I0103 20:13:59.605156   62050 pod_ready.go:81] duration metric: took 870.676494ms waiting for pod "etcd-default-k8s-diff-port-018788" in "kube-system" namespace to be "Ready" ...
	E0103 20:13:59.605168   62050 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-018788" hosting pod "etcd-default-k8s-diff-port-018788" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018788" has status "Ready":"False"
	I0103 20:13:59.605174   62050 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-018788" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:00.176543   62050 pod_ready.go:97] node "default-k8s-diff-port-018788" hosting pod "kube-apiserver-default-k8s-diff-port-018788" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018788" has status "Ready":"False"
	I0103 20:14:00.176583   62050 pod_ready.go:81] duration metric: took 571.400586ms waiting for pod "kube-apiserver-default-k8s-diff-port-018788" in "kube-system" namespace to be "Ready" ...
	E0103 20:14:00.176599   62050 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-018788" hosting pod "kube-apiserver-default-k8s-diff-port-018788" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018788" has status "Ready":"False"
	I0103 20:14:00.176608   62050 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-018788" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:00.201556   62050 pod_ready.go:97] node "default-k8s-diff-port-018788" hosting pod "kube-controller-manager-default-k8s-diff-port-018788" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018788" has status "Ready":"False"
	I0103 20:14:00.201620   62050 pod_ready.go:81] duration metric: took 24.987825ms waiting for pod "kube-controller-manager-default-k8s-diff-port-018788" in "kube-system" namespace to be "Ready" ...
	E0103 20:14:00.201637   62050 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-018788" hosting pod "kube-controller-manager-default-k8s-diff-port-018788" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018788" has status "Ready":"False"
	I0103 20:14:00.201647   62050 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-wqjlv" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:00.233069   62050 pod_ready.go:97] node "default-k8s-diff-port-018788" hosting pod "kube-proxy-wqjlv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018788" has status "Ready":"False"
	I0103 20:14:00.233108   62050 pod_ready.go:81] duration metric: took 31.451633ms waiting for pod "kube-proxy-wqjlv" in "kube-system" namespace to be "Ready" ...
	E0103 20:14:00.233127   62050 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-018788" hosting pod "kube-proxy-wqjlv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018788" has status "Ready":"False"
	I0103 20:14:00.233135   62050 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-018788" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:00.253505   62050 pod_ready.go:97] node "default-k8s-diff-port-018788" hosting pod "kube-scheduler-default-k8s-diff-port-018788" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018788" has status "Ready":"False"
	I0103 20:14:00.253534   62050 pod_ready.go:81] duration metric: took 20.386039ms waiting for pod "kube-scheduler-default-k8s-diff-port-018788" in "kube-system" namespace to be "Ready" ...
	E0103 20:14:00.253550   62050 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-018788" hosting pod "kube-scheduler-default-k8s-diff-port-018788" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018788" has status "Ready":"False"
	I0103 20:14:00.253559   62050 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:00.272626   62050 pod_ready.go:97] node "default-k8s-diff-port-018788" hosting pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018788" has status "Ready":"False"
	I0103 20:14:00.272661   62050 pod_ready.go:81] duration metric: took 19.09311ms waiting for pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace to be "Ready" ...
	E0103 20:14:00.272677   62050 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-018788" hosting pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018788" has status "Ready":"False"
	I0103 20:14:00.272687   62050 pod_ready.go:38] duration metric: took 2.64835186s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:14:00.272705   62050 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0103 20:14:00.321126   62050 ops.go:34] apiserver oom_adj: -16
	I0103 20:14:00.321189   62050 kubeadm.go:640] restartCluster took 23.164374098s
	I0103 20:14:00.321205   62050 kubeadm.go:406] StartCluster complete in 23.210428007s
	I0103 20:14:00.321226   62050 settings.go:142] acquiring lock: {Name:mkd213c48538fa01cb82b417485055a8adbf5e4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:14:00.321322   62050 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17885-9609/kubeconfig
	I0103 20:14:00.323470   62050 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-9609/kubeconfig: {Name:mkbd4e6a8b39f5a4a43fb71671a7bbd8b1617cf1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:14:00.323925   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0103 20:14:00.324242   62050 config.go:182] Loaded profile config "default-k8s-diff-port-018788": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 20:14:00.324381   62050 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0103 20:14:00.324467   62050 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-018788"
	I0103 20:14:00.324487   62050 addons.go:237] Setting addon storage-provisioner=true in "default-k8s-diff-port-018788"
	W0103 20:14:00.324495   62050 addons.go:246] addon storage-provisioner should already be in state true
	I0103 20:14:00.324536   62050 host.go:66] Checking if "default-k8s-diff-port-018788" exists ...
	I0103 20:14:00.324984   62050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:14:00.325013   62050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:14:00.325285   62050 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-018788"
	I0103 20:14:00.325304   62050 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-018788"
	I0103 20:14:00.325329   62050 addons.go:237] Setting addon metrics-server=true in "default-k8s-diff-port-018788"
	W0103 20:14:00.325337   62050 addons.go:246] addon metrics-server should already be in state true
	I0103 20:14:00.325376   62050 host.go:66] Checking if "default-k8s-diff-port-018788" exists ...
	I0103 20:14:00.325309   62050 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-018788"
	I0103 20:14:00.325722   62050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:14:00.325740   62050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:14:00.325935   62050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:14:00.326021   62050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:14:00.347496   62050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42465
	I0103 20:14:00.347895   62050 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:14:00.348392   62050 main.go:141] libmachine: Using API Version  1
	I0103 20:14:00.348415   62050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:14:00.348728   62050 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:14:00.349192   62050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:14:00.349228   62050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:14:00.349916   62050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42905
	I0103 20:14:00.350369   62050 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:14:00.351043   62050 main.go:141] libmachine: Using API Version  1
	I0103 20:14:00.351067   62050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:14:00.351579   62050 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:14:00.352288   62050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:14:00.352392   62050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:14:00.358540   62050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33231
	I0103 20:14:00.359079   62050 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:14:00.359582   62050 main.go:141] libmachine: Using API Version  1
	I0103 20:14:00.359607   62050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:14:00.359939   62050 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:14:00.360114   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetState
	I0103 20:14:00.364583   62050 addons.go:237] Setting addon default-storageclass=true in "default-k8s-diff-port-018788"
	W0103 20:14:00.364614   62050 addons.go:246] addon default-storageclass should already be in state true
	I0103 20:14:00.364645   62050 host.go:66] Checking if "default-k8s-diff-port-018788" exists ...
	I0103 20:14:00.365032   62050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:14:00.365080   62050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:14:00.365268   62050 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-018788" context rescaled to 1 replicas
	I0103 20:14:00.365315   62050 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.139 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0103 20:14:00.367628   62050 out.go:177] * Verifying Kubernetes components...
	I0103 20:14:00.376061   62050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 20:14:00.382421   62050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42521
	I0103 20:14:00.382601   62050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39615
	I0103 20:14:00.382708   62050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40189
	I0103 20:14:00.383285   62050 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:14:00.383310   62050 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:14:00.383837   62050 main.go:141] libmachine: Using API Version  1
	I0103 20:14:00.383837   62050 main.go:141] libmachine: Using API Version  1
	I0103 20:14:00.383855   62050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:14:00.383862   62050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:14:00.384200   62050 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:14:00.384674   62050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:14:00.384701   62050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:14:00.384740   62050 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:14:00.384914   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetState
	I0103 20:14:00.386513   62050 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:14:00.387010   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .DriverName
	I0103 20:14:00.387325   62050 main.go:141] libmachine: Using API Version  1
	I0103 20:14:00.387343   62050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:14:00.389302   62050 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0103 20:14:00.390931   62050 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0103 20:14:00.390945   62050 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0103 20:14:00.390960   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHHostname
	I0103 20:14:00.390651   62050 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:14:00.392318   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetState
	I0103 20:14:00.394641   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:14:00.395185   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c8:9f", ip: ""} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:14:00.395212   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:14:00.395483   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHPort
	I0103 20:14:00.395954   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .DriverName
	I0103 20:14:00.398448   62050 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 20:14:00.400431   62050 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0103 20:14:00.400454   62050 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0103 20:14:00.400476   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHHostname
	I0103 20:14:00.404480   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:14:00.405112   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c8:9f", ip: ""} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:14:00.405145   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:14:00.405765   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHPort
	I0103 20:14:00.405971   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHKeyPath
	I0103 20:14:00.407610   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHUsername
	I0103 20:14:00.407808   62050 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/default-k8s-diff-port-018788/id_rsa Username:docker}
	I0103 20:14:00.410796   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHKeyPath
	I0103 20:14:00.410964   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHUsername
	I0103 20:14:00.411436   62050 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/default-k8s-diff-port-018788/id_rsa Username:docker}
	I0103 20:14:00.417626   62050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41715
	I0103 20:14:00.418201   62050 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:14:00.422710   62050 main.go:141] libmachine: Using API Version  1
	I0103 20:14:00.422743   62050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:14:00.423232   62050 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:14:00.423421   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetState
	I0103 20:14:00.425364   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .DriverName
	I0103 20:14:00.425678   62050 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0103 20:14:00.425697   62050 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0103 20:14:00.425717   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHHostname
	I0103 20:14:00.429190   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:14:00.429720   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c8:9f", ip: ""} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:14:00.429745   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:14:00.429898   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHPort
	I0103 20:14:00.430599   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHKeyPath
	I0103 20:14:00.430803   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHUsername
	I0103 20:14:00.430946   62050 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/default-k8s-diff-port-018788/id_rsa Username:docker}
	I0103 20:14:00.621274   62050 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0103 20:14:00.621356   62050 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0103 20:14:00.641979   62050 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0103 20:14:00.681414   62050 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0103 20:14:00.682076   62050 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0103 20:14:00.682118   62050 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0103 20:14:00.760063   62050 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0103 20:14:00.760095   62050 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0103 20:14:00.833648   62050 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0103 20:14:00.840025   62050 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-018788" to be "Ready" ...
	I0103 20:14:00.840147   62050 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0103 20:14:02.423584   62050 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.78156374s)
	I0103 20:14:02.423631   62050 main.go:141] libmachine: Making call to close driver server
	I0103 20:14:02.423646   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .Close
	I0103 20:14:02.423584   62050 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.742133551s)
	I0103 20:14:02.423765   62050 main.go:141] libmachine: Making call to close driver server
	I0103 20:14:02.423784   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .Close
	I0103 20:14:02.423889   62050 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:14:02.423906   62050 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:14:02.423920   62050 main.go:141] libmachine: Making call to close driver server
	I0103 20:14:02.423930   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .Close
	I0103 20:14:02.424042   62050 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:14:02.424061   62050 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:14:02.424078   62050 main.go:141] libmachine: Making call to close driver server
	I0103 20:14:02.424076   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | Closing plugin on server side
	I0103 20:14:02.424104   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .Close
	I0103 20:14:02.424125   62050 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:14:02.424137   62050 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:14:02.424472   62050 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:14:02.424489   62050 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:14:02.424502   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | Closing plugin on server side
	I0103 20:14:02.431339   62050 main.go:141] libmachine: Making call to close driver server
	I0103 20:14:02.431368   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .Close
	I0103 20:14:02.431754   62050 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:14:02.431789   62050 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:14:02.431809   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | Closing plugin on server side
	I0103 20:14:02.575829   62050 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.742131608s)
	I0103 20:14:02.575880   62050 main.go:141] libmachine: Making call to close driver server
	I0103 20:14:02.575899   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .Close
	I0103 20:14:02.576351   62050 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:14:02.576374   62050 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:14:02.576391   62050 main.go:141] libmachine: Making call to close driver server
	I0103 20:14:02.576400   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .Close
	I0103 20:14:02.576619   62050 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:14:02.576632   62050 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:14:02.576641   62050 addons.go:473] Verifying addon metrics-server=true in "default-k8s-diff-port-018788"
	I0103 20:14:02.578918   62050 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
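	The pod_ready.go messages in this segment reflect repeated checks of each system pod's Ready condition against the apiserver, skipping pods whose node is not yet "Ready". Below is a minimal client-go sketch of that kind of wait; it is hand-written for illustration and is not minikube's code. The poll interval and error handling are assumptions; the kubeconfig path and pod name are taken from the log above.

	// Sketch only: wait until a named pod reports the Ready condition.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's Ready condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17885-9609/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Poll every 500ms for up to 4 minutes, mirroring the
		// "waiting up to 4m0s" messages in the log.
		err = wait.PollImmediate(500*time.Millisecond, 4*time.Minute, func() (bool, error) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-5dd5756b68-zxzqg", metav1.GetOptions{})
			if err != nil {
				return false, nil // treat lookup errors as transient and keep polling
			}
			return isPodReady(pod), nil
		})
		fmt.Println("pod ready:", err == nil)
	}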
	I0103 20:13:58.180342   61400 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I0103 20:13:58.180407   61400 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0103 20:13:58.180464   61400 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0103 20:13:58.194447   61400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 20:13:58.726157   61400 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I0103 20:13:58.726232   61400 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I0103 20:14:00.187852   61400 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.461700942s)
	I0103 20:14:00.187973   61400 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (1.461718478s)
	I0103 20:14:00.188007   61400 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I0103 20:14:00.188104   61400 cache_images.go:92] LoadImages completed in 2.825887616s
	W0103 20:14:00.188202   61400 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0: no such file or directory
	I0103 20:14:00.188285   61400 ssh_runner.go:195] Run: crio config
	I0103 20:14:00.270343   61400 cni.go:84] Creating CNI manager for ""
	I0103 20:14:00.270372   61400 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0103 20:14:00.270393   61400 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0103 20:14:00.270416   61400 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.12 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-927922 NodeName:old-k8s-version-927922 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.12"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.12 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0103 20:14:00.270624   61400 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.12
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-927922"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.12
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.12"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-927922
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.72.12:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0103 20:14:00.270765   61400 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-927922 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.12
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-927922 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0103 20:14:00.270842   61400 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0103 20:14:00.282011   61400 binaries.go:44] Found k8s binaries, skipping transfer
	I0103 20:14:00.282093   61400 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0103 20:14:00.292954   61400 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0103 20:14:00.314616   61400 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0103 20:14:00.366449   61400 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I0103 20:14:00.406579   61400 ssh_runner.go:195] Run: grep 192.168.72.12	control-plane.minikube.internal$ /etc/hosts
	I0103 20:14:00.410923   61400 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.12	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0103 20:14:00.430315   61400 certs.go:56] Setting up /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/old-k8s-version-927922 for IP: 192.168.72.12
	I0103 20:14:00.430352   61400 certs.go:190] acquiring lock for shared ca certs: {Name:mkcbd6a6a2f3ee7625ecf4a1f72bb7f9689bd33d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:14:00.430553   61400 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17885-9609/.minikube/ca.key
	I0103 20:14:00.430619   61400 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.key
	I0103 20:14:00.430718   61400 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/old-k8s-version-927922/client.key
	I0103 20:14:00.430798   61400 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/old-k8s-version-927922/apiserver.key.9a91cab3
	I0103 20:14:00.430854   61400 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/old-k8s-version-927922/proxy-client.key
	I0103 20:14:00.431018   61400 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795.pem (1338 bytes)
	W0103 20:14:00.431071   61400 certs.go:433] ignoring /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795_empty.pem, impossibly tiny 0 bytes
	I0103 20:14:00.431083   61400 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem (1675 bytes)
	I0103 20:14:00.431123   61400 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem (1078 bytes)
	I0103 20:14:00.431158   61400 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem (1123 bytes)
	I0103 20:14:00.431195   61400 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem (1679 bytes)
	I0103 20:14:00.431250   61400 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem (1708 bytes)
	I0103 20:14:00.432123   61400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/old-k8s-version-927922/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0103 20:14:00.472877   61400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/old-k8s-version-927922/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0103 20:14:00.505153   61400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/old-k8s-version-927922/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0103 20:14:00.533850   61400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/old-k8s-version-927922/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0103 20:14:00.564548   61400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0103 20:14:00.596464   61400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0103 20:14:00.626607   61400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0103 20:14:00.655330   61400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0103 20:14:00.681817   61400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem --> /usr/share/ca-certificates/167952.pem (1708 bytes)
	I0103 20:14:00.711039   61400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0103 20:14:00.742406   61400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795.pem --> /usr/share/ca-certificates/16795.pem (1338 bytes)
	I0103 20:14:00.768583   61400 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0103 20:14:00.786833   61400 ssh_runner.go:195] Run: openssl version
	I0103 20:14:00.793561   61400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167952.pem && ln -fs /usr/share/ca-certificates/167952.pem /etc/ssl/certs/167952.pem"
	I0103 20:14:00.807558   61400 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167952.pem
	I0103 20:14:00.812755   61400 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  3 19:07 /usr/share/ca-certificates/167952.pem
	I0103 20:14:00.812816   61400 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167952.pem
	I0103 20:14:00.820657   61400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167952.pem /etc/ssl/certs/3ec20f2e.0"
	I0103 20:14:00.832954   61400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0103 20:14:00.844707   61400 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:14:00.850334   61400 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  3 18:58 /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:14:00.850425   61400 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:14:00.856592   61400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0103 20:14:00.868105   61400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16795.pem && ln -fs /usr/share/ca-certificates/16795.pem /etc/ssl/certs/16795.pem"
	I0103 20:14:00.881551   61400 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16795.pem
	I0103 20:14:00.886462   61400 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  3 19:07 /usr/share/ca-certificates/16795.pem
	I0103 20:14:00.886550   61400 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16795.pem
	I0103 20:14:00.892487   61400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16795.pem /etc/ssl/certs/51391683.0"
	I0103 20:14:00.904363   61400 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0103 20:14:00.909429   61400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0103 20:14:00.915940   61400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0103 20:14:00.922496   61400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0103 20:14:00.928504   61400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0103 20:14:00.936016   61400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0103 20:14:00.943008   61400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0103 20:14:00.949401   61400 kubeadm.go:404] StartCluster: {Name:old-k8s-version-927922 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.16.0 ClusterName:old-k8s-version-927922 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.12 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertE
xpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 20:14:00.949524   61400 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0103 20:14:00.949614   61400 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0103 20:14:00.999406   61400 cri.go:89] found id: ""
	I0103 20:14:00.999494   61400 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0103 20:14:01.011041   61400 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0103 20:14:01.011063   61400 kubeadm.go:636] restartCluster start
	I0103 20:14:01.011130   61400 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0103 20:14:01.024488   61400 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:01.026094   61400 kubeconfig.go:92] found "old-k8s-version-927922" server: "https://192.168.72.12:8443"
	I0103 20:14:01.029577   61400 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0103 20:14:01.041599   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:01.041674   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:01.055545   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:01.542034   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:01.542135   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:01.554826   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:02.042049   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:02.042166   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:02.056693   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:02.542275   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:02.542363   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:02.557025   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:03.041864   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:03.041968   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:03.054402   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:59.937077   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:02.440275   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:00.287822   62015 pod_ready.go:102] pod "etcd-no-preload-749210" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:00.712464   62015 pod_ready.go:92] pod "etcd-no-preload-749210" in "kube-system" namespace has status "Ready":"True"
	I0103 20:14:00.712486   62015 pod_ready.go:81] duration metric: took 2.509421629s waiting for pod "etcd-no-preload-749210" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:00.712494   62015 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-749210" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:00.722133   62015 pod_ready.go:92] pod "kube-apiserver-no-preload-749210" in "kube-system" namespace has status "Ready":"True"
	I0103 20:14:00.722175   62015 pod_ready.go:81] duration metric: took 9.671952ms waiting for pod "kube-apiserver-no-preload-749210" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:00.722188   62015 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-749210" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:00.728860   62015 pod_ready.go:92] pod "kube-controller-manager-no-preload-749210" in "kube-system" namespace has status "Ready":"True"
	I0103 20:14:00.728888   62015 pod_ready.go:81] duration metric: took 6.691622ms waiting for pod "kube-controller-manager-no-preload-749210" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:00.728901   62015 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5hwf4" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:00.736669   62015 pod_ready.go:92] pod "kube-proxy-5hwf4" in "kube-system" namespace has status "Ready":"True"
	I0103 20:14:00.736690   62015 pod_ready.go:81] duration metric: took 7.783204ms waiting for pod "kube-proxy-5hwf4" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:00.736699   62015 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-749210" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:02.245720   62015 pod_ready.go:92] pod "kube-scheduler-no-preload-749210" in "kube-system" namespace has status "Ready":"True"
	I0103 20:14:02.245750   62015 pod_ready.go:81] duration metric: took 1.509042822s waiting for pod "kube-scheduler-no-preload-749210" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:02.245764   62015 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:04.253082   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:02.580440   62050 addons.go:508] enable addons completed in 2.256058454s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0103 20:14:02.845486   62050 node_ready.go:58] node "default-k8s-diff-port-018788" has status "Ready":"False"
	I0103 20:14:05.343961   62050 node_ready.go:58] node "default-k8s-diff-port-018788" has status "Ready":"False"
	I0103 20:14:03.542326   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:03.542407   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:03.554128   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:04.041685   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:04.041779   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:04.053727   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:04.542332   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:04.542417   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:04.554478   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:05.042026   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:05.042120   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:05.055763   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:05.541892   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:05.541996   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:05.554974   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:06.042576   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:06.042675   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:06.055902   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:06.542543   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:06.542636   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:06.555494   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:07.041757   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:07.041844   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:07.053440   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:07.542083   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:07.542162   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:07.555336   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:08.041841   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:08.041929   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:08.055229   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:04.936356   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:06.938795   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:06.754049   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:09.253568   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:06.345058   62050 node_ready.go:49] node "default-k8s-diff-port-018788" has status "Ready":"True"
	I0103 20:14:06.345083   62050 node_ready.go:38] duration metric: took 5.505020144s waiting for node "default-k8s-diff-port-018788" to be "Ready" ...
	I0103 20:14:06.345094   62050 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:14:06.351209   62050 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-zxzqg" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:06.357786   62050 pod_ready.go:92] pod "coredns-5dd5756b68-zxzqg" in "kube-system" namespace has status "Ready":"True"
	I0103 20:14:06.357811   62050 pod_ready.go:81] duration metric: took 6.576128ms waiting for pod "coredns-5dd5756b68-zxzqg" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:06.357819   62050 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-018788" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:08.365570   62050 pod_ready.go:102] pod "etcd-default-k8s-diff-port-018788" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:10.366402   62050 pod_ready.go:102] pod "etcd-default-k8s-diff-port-018788" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:08.542285   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:08.542428   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:08.554155   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:09.041695   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:09.041800   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:09.054337   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:09.541733   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:09.541817   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:09.554231   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:10.041785   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:10.041863   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:10.053870   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:10.541893   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:10.541988   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:10.554220   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:11.042579   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:11.042662   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:11.054683   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:11.054717   61400 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0103 20:14:11.054728   61400 kubeadm.go:1135] stopping kube-system containers ...
	I0103 20:14:11.054738   61400 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0103 20:14:11.054804   61400 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0103 20:14:11.099741   61400 cri.go:89] found id: ""
	I0103 20:14:11.099806   61400 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0103 20:14:11.115939   61400 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0103 20:14:11.125253   61400 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0103 20:14:11.125309   61400 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0103 20:14:11.134126   61400 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0103 20:14:11.134151   61400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:14:11.244373   61400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:14:12.026578   61400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:14:12.238755   61400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:14:12.326635   61400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:14:12.411494   61400 api_server.go:52] waiting for apiserver process to appear ...
	I0103 20:14:12.411597   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:14:12.912324   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:14:09.437304   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:11.937833   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:11.755341   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:14.254295   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:10.864860   62050 pod_ready.go:92] pod "etcd-default-k8s-diff-port-018788" in "kube-system" namespace has status "Ready":"True"
	I0103 20:14:10.864892   62050 pod_ready.go:81] duration metric: took 4.507065243s waiting for pod "etcd-default-k8s-diff-port-018788" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:10.864906   62050 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-018788" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:10.871510   62050 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-018788" in "kube-system" namespace has status "Ready":"True"
	I0103 20:14:10.871532   62050 pod_ready.go:81] duration metric: took 6.618246ms waiting for pod "kube-apiserver-default-k8s-diff-port-018788" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:10.871542   62050 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-018788" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:10.877385   62050 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-018788" in "kube-system" namespace has status "Ready":"True"
	I0103 20:14:10.877411   62050 pod_ready.go:81] duration metric: took 5.859396ms waiting for pod "kube-controller-manager-default-k8s-diff-port-018788" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:10.877423   62050 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wqjlv" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:10.883355   62050 pod_ready.go:92] pod "kube-proxy-wqjlv" in "kube-system" namespace has status "Ready":"True"
	I0103 20:14:10.883381   62050 pod_ready.go:81] duration metric: took 5.949857ms waiting for pod "kube-proxy-wqjlv" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:10.883391   62050 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-018788" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:10.888160   62050 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-018788" in "kube-system" namespace has status "Ready":"True"
	I0103 20:14:10.888186   62050 pod_ready.go:81] duration metric: took 4.782893ms waiting for pod "kube-scheduler-default-k8s-diff-port-018788" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:10.888198   62050 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:12.896310   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:14.897306   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:13.412544   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:14:13.912006   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:14:13.939301   61400 api_server.go:72] duration metric: took 1.527807222s to wait for apiserver process to appear ...
	I0103 20:14:13.939328   61400 api_server.go:88] waiting for apiserver healthz status ...
	I0103 20:14:13.939357   61400 api_server.go:253] Checking apiserver healthz at https://192.168.72.12:8443/healthz ...
	I0103 20:14:13.941001   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:16.438272   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:16.752567   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:18.758446   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:17.397429   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:19.399199   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:18.940403   61400 api_server.go:269] stopped: https://192.168.72.12:8443/healthz: Get "https://192.168.72.12:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0103 20:14:18.940444   61400 api_server.go:253] Checking apiserver healthz at https://192.168.72.12:8443/healthz ...
	I0103 20:14:19.563874   61400 api_server.go:279] https://192.168.72.12:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0103 20:14:19.563907   61400 api_server.go:103] status: https://192.168.72.12:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0103 20:14:19.563925   61400 api_server.go:253] Checking apiserver healthz at https://192.168.72.12:8443/healthz ...
	I0103 20:14:19.591366   61400 api_server.go:279] https://192.168.72.12:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0103 20:14:19.591397   61400 api_server.go:103] status: https://192.168.72.12:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0103 20:14:19.939684   61400 api_server.go:253] Checking apiserver healthz at https://192.168.72.12:8443/healthz ...
	I0103 20:14:19.951743   61400 api_server.go:279] https://192.168.72.12:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0103 20:14:19.951795   61400 api_server.go:103] status: https://192.168.72.12:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0103 20:14:20.439712   61400 api_server.go:253] Checking apiserver healthz at https://192.168.72.12:8443/healthz ...
	I0103 20:14:20.448251   61400 api_server.go:279] https://192.168.72.12:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0103 20:14:20.448289   61400 api_server.go:103] status: https://192.168.72.12:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0103 20:14:20.939773   61400 api_server.go:253] Checking apiserver healthz at https://192.168.72.12:8443/healthz ...
	I0103 20:14:20.946227   61400 api_server.go:279] https://192.168.72.12:8443/healthz returned 200:
	ok
	I0103 20:14:20.954666   61400 api_server.go:141] control plane version: v1.16.0
	I0103 20:14:20.954702   61400 api_server.go:131] duration metric: took 7.015366394s to wait for apiserver health ...
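	Once the apiserver reports healthy, the same /healthz endpoint can be probed by hand. A minimal sketch, assuming the client certificates checked earlier in this log are still valid on the node (anonymous requests are rejected with 403, as shown above):

	# illustrative only; the endpoint and certificate paths are taken from the log above
	sudo curl -sk \
	  --cert /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	  --key  /var/lib/minikube/certs/apiserver-kubelet-client.key \
	  https://192.168.72.12:8443/healthz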
	I0103 20:14:20.954718   61400 cni.go:84] Creating CNI manager for ""
	I0103 20:14:20.954726   61400 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0103 20:14:20.956786   61400 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0103 20:14:20.958180   61400 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0103 20:14:20.969609   61400 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0103 20:14:20.986353   61400 system_pods.go:43] waiting for kube-system pods to appear ...
	I0103 20:14:20.996751   61400 system_pods.go:59] 8 kube-system pods found
	I0103 20:14:20.996786   61400 system_pods.go:61] "coredns-5644d7b6d9-99qhg" [d43c98b2-5ed4-42a7-bdb9-28f5b3c7b99f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0103 20:14:20.996795   61400 system_pods.go:61] "coredns-5644d7b6d9-nvbsl" [22884cc1-f360-4ee8-bafc-340bb24faa41] Running
	I0103 20:14:20.996804   61400 system_pods.go:61] "etcd-old-k8s-version-927922" [f395d0d3-416a-4915-b587-6e51eb8648a2] Running
	I0103 20:14:20.996811   61400 system_pods.go:61] "kube-apiserver-old-k8s-version-927922" [c62c011b-74fa-440c-9ff9-56721cb1a58d] Running
	I0103 20:14:20.996821   61400 system_pods.go:61] "kube-controller-manager-old-k8s-version-927922" [3d85024c-8cc4-4a99-b8b7-2151c10918f7] Pending
	I0103 20:14:20.996828   61400 system_pods.go:61] "kube-proxy-jk7jw" [ef720f69-1bfd-4e75-9943-ff7ee3145ecc] Running
	I0103 20:14:20.996835   61400 system_pods.go:61] "kube-scheduler-old-k8s-version-927922" [74ed1414-7a76-45bd-9c0e-e4c9670d4c1b] Running
	I0103 20:14:20.996845   61400 system_pods.go:61] "storage-provisioner" [4157ff41-1b3b-4eb7-b23b-2de69398161c] Running
	I0103 20:14:20.996857   61400 system_pods.go:74] duration metric: took 10.474644ms to wait for pod list to return data ...
	I0103 20:14:20.996870   61400 node_conditions.go:102] verifying NodePressure condition ...
	I0103 20:14:21.000635   61400 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0103 20:14:21.000665   61400 node_conditions.go:123] node cpu capacity is 2
	I0103 20:14:21.000677   61400 node_conditions.go:105] duration metric: took 3.80125ms to run NodePressure ...
	I0103 20:14:21.000698   61400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:14:21.233310   61400 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0103 20:14:21.241408   61400 kubeadm.go:787] kubelet initialised
	I0103 20:14:21.241445   61400 kubeadm.go:788] duration metric: took 8.096237ms waiting for restarted kubelet to initialise ...
	I0103 20:14:21.241456   61400 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:14:21.251897   61400 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-99qhg" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:21.264624   61400 pod_ready.go:97] node "old-k8s-version-927922" hosting pod "coredns-5644d7b6d9-99qhg" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-927922" has status "Ready":"False"
	I0103 20:14:21.264657   61400 pod_ready.go:81] duration metric: took 12.728783ms waiting for pod "coredns-5644d7b6d9-99qhg" in "kube-system" namespace to be "Ready" ...
	E0103 20:14:21.264670   61400 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-927922" hosting pod "coredns-5644d7b6d9-99qhg" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-927922" has status "Ready":"False"
	I0103 20:14:21.264700   61400 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-nvbsl" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:21.282371   61400 pod_ready.go:97] node "old-k8s-version-927922" hosting pod "coredns-5644d7b6d9-nvbsl" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-927922" has status "Ready":"False"
	I0103 20:14:21.282400   61400 pod_ready.go:81] duration metric: took 17.657706ms waiting for pod "coredns-5644d7b6d9-nvbsl" in "kube-system" namespace to be "Ready" ...
	E0103 20:14:21.282410   61400 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-927922" hosting pod "coredns-5644d7b6d9-nvbsl" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-927922" has status "Ready":"False"
	I0103 20:14:21.282416   61400 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-927922" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:21.288986   61400 pod_ready.go:97] node "old-k8s-version-927922" hosting pod "etcd-old-k8s-version-927922" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-927922" has status "Ready":"False"
	I0103 20:14:21.289016   61400 pod_ready.go:81] duration metric: took 6.590018ms waiting for pod "etcd-old-k8s-version-927922" in "kube-system" namespace to be "Ready" ...
	E0103 20:14:21.289028   61400 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-927922" hosting pod "etcd-old-k8s-version-927922" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-927922" has status "Ready":"False"
	I0103 20:14:21.289036   61400 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-927922" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:21.391318   61400 pod_ready.go:97] node "old-k8s-version-927922" hosting pod "kube-apiserver-old-k8s-version-927922" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-927922" has status "Ready":"False"
	I0103 20:14:21.391358   61400 pod_ready.go:81] duration metric: took 102.309139ms waiting for pod "kube-apiserver-old-k8s-version-927922" in "kube-system" namespace to be "Ready" ...
	E0103 20:14:21.391371   61400 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-927922" hosting pod "kube-apiserver-old-k8s-version-927922" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-927922" has status "Ready":"False"
	I0103 20:14:21.391390   61400 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-927922" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:21.790147   61400 pod_ready.go:97] node "old-k8s-version-927922" hosting pod "kube-controller-manager-old-k8s-version-927922" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-927922" has status "Ready":"False"
	I0103 20:14:21.790184   61400 pod_ready.go:81] duration metric: took 398.776559ms waiting for pod "kube-controller-manager-old-k8s-version-927922" in "kube-system" namespace to be "Ready" ...
	E0103 20:14:21.790202   61400 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-927922" hosting pod "kube-controller-manager-old-k8s-version-927922" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-927922" has status "Ready":"False"
	I0103 20:14:21.790213   61400 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jk7jw" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:22.190088   61400 pod_ready.go:97] node "old-k8s-version-927922" hosting pod "kube-proxy-jk7jw" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-927922" has status "Ready":"False"
	I0103 20:14:22.190118   61400 pod_ready.go:81] duration metric: took 399.895826ms waiting for pod "kube-proxy-jk7jw" in "kube-system" namespace to be "Ready" ...
	E0103 20:14:22.190132   61400 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-927922" hosting pod "kube-proxy-jk7jw" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-927922" has status "Ready":"False"
	I0103 20:14:22.190146   61400 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-927922" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:22.590412   61400 pod_ready.go:97] node "old-k8s-version-927922" hosting pod "kube-scheduler-old-k8s-version-927922" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-927922" has status "Ready":"False"
	I0103 20:14:22.590470   61400 pod_ready.go:81] duration metric: took 400.308646ms waiting for pod "kube-scheduler-old-k8s-version-927922" in "kube-system" namespace to be "Ready" ...
	E0103 20:14:22.590484   61400 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-927922" hosting pod "kube-scheduler-old-k8s-version-927922" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-927922" has status "Ready":"False"
	I0103 20:14:22.590494   61400 pod_ready.go:38] duration metric: took 1.349028144s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:14:22.590533   61400 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0103 20:14:22.610035   61400 ops.go:34] apiserver oom_adj: -16
	I0103 20:14:22.610060   61400 kubeadm.go:640] restartCluster took 21.598991094s
	I0103 20:14:22.610071   61400 kubeadm.go:406] StartCluster complete in 21.660680377s
	I0103 20:14:22.610091   61400 settings.go:142] acquiring lock: {Name:mkd213c48538fa01cb82b417485055a8adbf5e4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:14:22.610178   61400 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17885-9609/kubeconfig
	I0103 20:14:22.613053   61400 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-9609/kubeconfig: {Name:mkbd4e6a8b39f5a4a43fb71671a7bbd8b1617cf1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:14:22.613314   61400 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0103 20:14:22.613472   61400 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0103 20:14:22.613563   61400 config.go:182] Loaded profile config "old-k8s-version-927922": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0103 20:14:22.613570   61400 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-927922"
	I0103 20:14:22.613584   61400 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-927922"
	I0103 20:14:22.613597   61400 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-927922"
	I0103 20:14:22.613625   61400 addons.go:237] Setting addon metrics-server=true in "old-k8s-version-927922"
	W0103 20:14:22.613637   61400 addons.go:246] addon metrics-server should already be in state true
	I0103 20:14:22.613639   61400 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-927922"
	I0103 20:14:22.613605   61400 addons.go:237] Setting addon storage-provisioner=true in "old-k8s-version-927922"
	W0103 20:14:22.613706   61400 addons.go:246] addon storage-provisioner should already be in state true
	I0103 20:14:22.613769   61400 host.go:66] Checking if "old-k8s-version-927922" exists ...
	I0103 20:14:22.613691   61400 host.go:66] Checking if "old-k8s-version-927922" exists ...
	I0103 20:14:22.614097   61400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:14:22.614129   61400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:14:22.614170   61400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:14:22.614204   61400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:14:22.614293   61400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:14:22.614334   61400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:14:22.631032   61400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43511
	I0103 20:14:22.631689   61400 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:14:22.632149   61400 main.go:141] libmachine: Using API Version  1
	I0103 20:14:22.632172   61400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:14:22.632553   61400 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:14:22.632811   61400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46781
	I0103 20:14:22.632820   61400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42907
	I0103 20:14:22.633222   61400 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:14:22.633340   61400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:14:22.633352   61400 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:14:22.633385   61400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:14:22.633695   61400 main.go:141] libmachine: Using API Version  1
	I0103 20:14:22.633719   61400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:14:22.634106   61400 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:14:22.634117   61400 main.go:141] libmachine: Using API Version  1
	I0103 20:14:22.634139   61400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:14:22.634544   61400 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:14:22.634711   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetState
	I0103 20:14:22.634782   61400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:14:22.634821   61400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:14:22.639076   61400 addons.go:237] Setting addon default-storageclass=true in "old-k8s-version-927922"
	W0103 20:14:22.639233   61400 addons.go:246] addon default-storageclass should already be in state true
	I0103 20:14:22.639274   61400 host.go:66] Checking if "old-k8s-version-927922" exists ...
	I0103 20:14:22.640636   61400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:14:22.640703   61400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:14:22.653581   61400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38773
	I0103 20:14:22.654135   61400 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:14:22.654693   61400 main.go:141] libmachine: Using API Version  1
	I0103 20:14:22.654720   61400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:14:22.655050   61400 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:14:22.655267   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetState
	I0103 20:14:22.655611   61400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45149
	I0103 20:14:22.656058   61400 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:14:22.656503   61400 main.go:141] libmachine: Using API Version  1
	I0103 20:14:22.656527   61400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:14:22.656976   61400 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:14:22.657189   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetState
	I0103 20:14:22.657904   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .DriverName
	I0103 20:14:22.660090   61400 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 20:14:22.659044   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .DriverName
	I0103 20:14:22.659283   61400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38149
	I0103 20:14:22.663010   61400 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0103 20:14:22.663022   61400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0103 20:14:22.663037   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHHostname
	I0103 20:14:22.664758   61400 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0103 20:14:22.663341   61400 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:14:22.665665   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:14:22.666177   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:79:06", ip: ""} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:14:22.666201   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:14:22.666255   61400 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0103 20:14:22.666266   61400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0103 20:14:22.666282   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHHostname
	I0103 20:14:22.666382   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHPort
	I0103 20:14:22.666505   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHKeyPath
	I0103 20:14:22.666726   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHUsername
	I0103 20:14:22.666884   61400 sshutil.go:53] new ssh client: &{IP:192.168.72.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/old-k8s-version-927922/id_rsa Username:docker}
	I0103 20:14:22.666901   61400 main.go:141] libmachine: Using API Version  1
	I0103 20:14:22.666926   61400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:14:22.667344   61400 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:14:22.667940   61400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:14:22.667983   61400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:14:22.668718   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:14:22.668933   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:79:06", ip: ""} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:14:22.668961   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:14:22.669116   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHPort
	I0103 20:14:22.669262   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHKeyPath
	I0103 20:14:22.669388   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHUsername
	I0103 20:14:22.669506   61400 sshutil.go:53] new ssh client: &{IP:192.168.72.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/old-k8s-version-927922/id_rsa Username:docker}
	I0103 20:14:22.711545   61400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42371
	I0103 20:14:22.711969   61400 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:14:22.712493   61400 main.go:141] libmachine: Using API Version  1
	I0103 20:14:22.712519   61400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:14:22.712853   61400 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:14:22.713077   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetState
	I0103 20:14:22.715086   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .DriverName
	I0103 20:14:22.715371   61400 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0103 20:14:22.715390   61400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0103 20:14:22.715405   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHHostname
	I0103 20:14:22.718270   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:14:22.718638   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:79:06", ip: ""} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:14:22.718671   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:14:22.718876   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHPort
	I0103 20:14:22.719076   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHKeyPath
	I0103 20:14:22.719263   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHUsername
	I0103 20:14:22.719451   61400 sshutil.go:53] new ssh client: &{IP:192.168.72.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/old-k8s-version-927922/id_rsa Username:docker}
	I0103 20:14:22.795601   61400 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0103 20:14:22.887631   61400 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0103 20:14:22.887660   61400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0103 20:14:22.889717   61400 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0103 20:14:22.932293   61400 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0103 20:14:22.932324   61400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0103 20:14:22.939480   61400 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0103 20:14:22.979425   61400 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0103 20:14:22.979455   61400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0103 20:14:23.017495   61400 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0103 20:14:23.255786   61400 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-927922" context rescaled to 1 replicas
	I0103 20:14:23.255832   61400 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.12 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0103 20:14:23.257785   61400 out.go:177] * Verifying Kubernetes components...
	I0103 20:14:18.937821   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:21.435750   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:23.438082   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:23.259380   61400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 20:14:23.430371   61400 main.go:141] libmachine: Making call to close driver server
	I0103 20:14:23.430402   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .Close
	I0103 20:14:23.430532   61400 main.go:141] libmachine: Making call to close driver server
	I0103 20:14:23.430557   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .Close
	I0103 20:14:23.430710   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | Closing plugin on server side
	I0103 20:14:23.430741   61400 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:14:23.430778   61400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:14:23.430798   61400 main.go:141] libmachine: Making call to close driver server
	I0103 20:14:23.430806   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .Close
	I0103 20:14:23.432333   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | Closing plugin on server side
	I0103 20:14:23.432345   61400 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:14:23.432353   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | Closing plugin on server side
	I0103 20:14:23.432363   61400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:14:23.432373   61400 main.go:141] libmachine: Making call to close driver server
	I0103 20:14:23.432382   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .Close
	I0103 20:14:23.432383   61400 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:14:23.432394   61400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:14:23.432602   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | Closing plugin on server side
	I0103 20:14:23.432654   61400 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:14:23.432674   61400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:14:23.438313   61400 main.go:141] libmachine: Making call to close driver server
	I0103 20:14:23.438335   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .Close
	I0103 20:14:23.438566   61400 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:14:23.438585   61400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:14:23.438662   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | Closing plugin on server side
	I0103 20:14:23.598304   61400 main.go:141] libmachine: Making call to close driver server
	I0103 20:14:23.598338   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .Close
	I0103 20:14:23.598363   61400 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-927922" to be "Ready" ...
	I0103 20:14:23.598669   61400 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:14:23.598687   61400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:14:23.598696   61400 main.go:141] libmachine: Making call to close driver server
	I0103 20:14:23.598705   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .Close
	I0103 20:14:23.598917   61400 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:14:23.598938   61400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:14:23.598960   61400 addons.go:473] Verifying addon metrics-server=true in "old-k8s-version-927922"
	I0103 20:14:23.601038   61400 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0103 20:14:21.253707   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:23.254276   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:21.399352   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:23.895781   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:23.602562   61400 addons.go:508] enable addons completed in 989.095706ms: enabled=[storage-provisioner default-storageclass metrics-server]
	I0103 20:14:25.602268   61400 node_ready.go:58] node "old-k8s-version-927922" has status "Ready":"False"
	I0103 20:14:27.602561   61400 node_ready.go:58] node "old-k8s-version-927922" has status "Ready":"False"
	I0103 20:14:25.439366   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:27.934938   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:25.753982   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:28.253688   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:26.398696   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:28.896789   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:29.603040   61400 node_ready.go:58] node "old-k8s-version-927922" has status "Ready":"False"
	I0103 20:14:30.102640   61400 node_ready.go:49] node "old-k8s-version-927922" has status "Ready":"True"
	I0103 20:14:30.102663   61400 node_ready.go:38] duration metric: took 6.504277703s waiting for node "old-k8s-version-927922" to be "Ready" ...
	I0103 20:14:30.102672   61400 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:14:30.107593   61400 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-nvbsl" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:30.112792   61400 pod_ready.go:92] pod "coredns-5644d7b6d9-nvbsl" in "kube-system" namespace has status "Ready":"True"
	I0103 20:14:30.112817   61400 pod_ready.go:81] duration metric: took 5.195453ms waiting for pod "coredns-5644d7b6d9-nvbsl" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:30.112828   61400 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-927922" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:30.117802   61400 pod_ready.go:92] pod "etcd-old-k8s-version-927922" in "kube-system" namespace has status "Ready":"True"
	I0103 20:14:30.117827   61400 pod_ready.go:81] duration metric: took 4.989616ms waiting for pod "etcd-old-k8s-version-927922" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:30.117839   61400 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-927922" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:30.123548   61400 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-927922" in "kube-system" namespace has status "Ready":"True"
	I0103 20:14:30.123570   61400 pod_ready.go:81] duration metric: took 5.723206ms waiting for pod "kube-apiserver-old-k8s-version-927922" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:30.123580   61400 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-927922" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:30.128232   61400 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-927922" in "kube-system" namespace has status "Ready":"True"
	I0103 20:14:30.128257   61400 pod_ready.go:81] duration metric: took 4.670196ms waiting for pod "kube-controller-manager-old-k8s-version-927922" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:30.128269   61400 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jk7jw" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:30.503735   61400 pod_ready.go:92] pod "kube-proxy-jk7jw" in "kube-system" namespace has status "Ready":"True"
	I0103 20:14:30.503782   61400 pod_ready.go:81] duration metric: took 375.504442ms waiting for pod "kube-proxy-jk7jw" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:30.503796   61400 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-927922" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:30.903117   61400 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-927922" in "kube-system" namespace has status "Ready":"True"
	I0103 20:14:30.903145   61400 pod_ready.go:81] duration metric: took 399.341883ms waiting for pod "kube-scheduler-old-k8s-version-927922" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:30.903155   61400 pod_ready.go:38] duration metric: took 800.474934ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:14:30.903167   61400 api_server.go:52] waiting for apiserver process to appear ...
	I0103 20:14:30.903215   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:14:30.917506   61400 api_server.go:72] duration metric: took 7.661640466s to wait for apiserver process to appear ...
	I0103 20:14:30.917537   61400 api_server.go:88] waiting for apiserver healthz status ...
	I0103 20:14:30.917558   61400 api_server.go:253] Checking apiserver healthz at https://192.168.72.12:8443/healthz ...
	I0103 20:14:30.923921   61400 api_server.go:279] https://192.168.72.12:8443/healthz returned 200:
	ok
	I0103 20:14:30.924810   61400 api_server.go:141] control plane version: v1.16.0
	I0103 20:14:30.924830   61400 api_server.go:131] duration metric: took 7.286806ms to wait for apiserver health ...
	I0103 20:14:30.924837   61400 system_pods.go:43] waiting for kube-system pods to appear ...
	I0103 20:14:31.105108   61400 system_pods.go:59] 7 kube-system pods found
	I0103 20:14:31.105140   61400 system_pods.go:61] "coredns-5644d7b6d9-nvbsl" [22884cc1-f360-4ee8-bafc-340bb24faa41] Running
	I0103 20:14:31.105144   61400 system_pods.go:61] "etcd-old-k8s-version-927922" [f395d0d3-416a-4915-b587-6e51eb8648a2] Running
	I0103 20:14:31.105149   61400 system_pods.go:61] "kube-apiserver-old-k8s-version-927922" [c62c011b-74fa-440c-9ff9-56721cb1a58d] Running
	I0103 20:14:31.105153   61400 system_pods.go:61] "kube-controller-manager-old-k8s-version-927922" [3d85024c-8cc4-4a99-b8b7-2151c10918f7] Running
	I0103 20:14:31.105156   61400 system_pods.go:61] "kube-proxy-jk7jw" [ef720f69-1bfd-4e75-9943-ff7ee3145ecc] Running
	I0103 20:14:31.105160   61400 system_pods.go:61] "kube-scheduler-old-k8s-version-927922" [74ed1414-7a76-45bd-9c0e-e4c9670d4c1b] Running
	I0103 20:14:31.105164   61400 system_pods.go:61] "storage-provisioner" [4157ff41-1b3b-4eb7-b23b-2de69398161c] Running
	I0103 20:14:31.105168   61400 system_pods.go:74] duration metric: took 180.326535ms to wait for pod list to return data ...
	I0103 20:14:31.105176   61400 default_sa.go:34] waiting for default service account to be created ...
	I0103 20:14:31.303919   61400 default_sa.go:45] found service account: "default"
	I0103 20:14:31.303945   61400 default_sa.go:55] duration metric: took 198.763782ms for default service account to be created ...
	I0103 20:14:31.303952   61400 system_pods.go:116] waiting for k8s-apps to be running ...
	I0103 20:14:31.504913   61400 system_pods.go:86] 7 kube-system pods found
	I0103 20:14:31.504942   61400 system_pods.go:89] "coredns-5644d7b6d9-nvbsl" [22884cc1-f360-4ee8-bafc-340bb24faa41] Running
	I0103 20:14:31.504948   61400 system_pods.go:89] "etcd-old-k8s-version-927922" [f395d0d3-416a-4915-b587-6e51eb8648a2] Running
	I0103 20:14:31.504952   61400 system_pods.go:89] "kube-apiserver-old-k8s-version-927922" [c62c011b-74fa-440c-9ff9-56721cb1a58d] Running
	I0103 20:14:31.504960   61400 system_pods.go:89] "kube-controller-manager-old-k8s-version-927922" [3d85024c-8cc4-4a99-b8b7-2151c10918f7] Running
	I0103 20:14:31.504964   61400 system_pods.go:89] "kube-proxy-jk7jw" [ef720f69-1bfd-4e75-9943-ff7ee3145ecc] Running
	I0103 20:14:31.504967   61400 system_pods.go:89] "kube-scheduler-old-k8s-version-927922" [74ed1414-7a76-45bd-9c0e-e4c9670d4c1b] Running
	I0103 20:14:31.504971   61400 system_pods.go:89] "storage-provisioner" [4157ff41-1b3b-4eb7-b23b-2de69398161c] Running
	I0103 20:14:31.504978   61400 system_pods.go:126] duration metric: took 201.020363ms to wait for k8s-apps to be running ...
	I0103 20:14:31.504987   61400 system_svc.go:44] waiting for kubelet service to be running ....
	I0103 20:14:31.505042   61400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 20:14:31.519544   61400 system_svc.go:56] duration metric: took 14.547054ms WaitForService to wait for kubelet.
	I0103 20:14:31.519581   61400 kubeadm.go:581] duration metric: took 8.263723255s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0103 20:14:31.519604   61400 node_conditions.go:102] verifying NodePressure condition ...
	I0103 20:14:31.703367   61400 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0103 20:14:31.703393   61400 node_conditions.go:123] node cpu capacity is 2
	I0103 20:14:31.703402   61400 node_conditions.go:105] duration metric: took 183.794284ms to run NodePressure ...
	I0103 20:14:31.703413   61400 start.go:228] waiting for startup goroutines ...
	I0103 20:14:31.703419   61400 start.go:233] waiting for cluster config update ...
	I0103 20:14:31.703427   61400 start.go:242] writing updated cluster config ...
	I0103 20:14:31.703726   61400 ssh_runner.go:195] Run: rm -f paused
	I0103 20:14:31.752491   61400 start.go:600] kubectl: 1.29.0, cluster: 1.16.0 (minor skew: 13)
	I0103 20:14:31.754609   61400 out.go:177] 
	W0103 20:14:31.756132   61400 out.go:239] ! /usr/local/bin/kubectl is version 1.29.0, which may have incompatibilities with Kubernetes 1.16.0.
	I0103 20:14:31.757531   61400 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0103 20:14:31.758908   61400 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-927922" cluster and "default" namespace by default
	I0103 20:14:29.937557   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:32.437025   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:30.253875   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:32.752584   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:30.898036   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:33.398935   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:34.936535   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:37.436533   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:34.753233   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:37.252419   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:39.253992   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:35.896170   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:37.897520   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:40.397608   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:39.438748   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:41.439514   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:41.254480   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:43.756719   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:42.397869   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:44.398305   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:43.935597   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:45.936279   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:47.939184   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:46.253445   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:48.254497   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:46.896653   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:49.395106   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:50.436008   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:52.436929   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:50.754391   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:53.253984   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:51.396664   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:53.895621   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:54.937380   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:57.435980   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:55.254262   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:57.254379   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:56.399473   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:58.895378   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:59.436517   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:01.436644   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:03.437289   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:59.754343   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:02.256605   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:00.896080   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:02.896456   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:05.396614   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:05.935218   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:07.936528   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:04.753320   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:06.753702   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:08.754470   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:07.909774   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:10.398298   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:10.435847   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:12.436285   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:10.755735   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:13.260340   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:12.898368   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:15.395141   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:14.437252   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:16.437752   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:15.753850   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:18.252984   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:17.396224   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:19.396412   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:18.935744   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:20.936627   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:22.937157   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:20.753996   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:23.252893   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:21.396466   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:23.396556   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:25.435441   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:27.437177   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:25.253294   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:27.257573   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:25.895526   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:27.897999   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:30.396749   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:29.935811   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:31.936769   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:29.754895   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:32.252296   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:34.252439   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:32.398706   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:34.895914   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:34.435649   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:36.435937   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:36.253151   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:38.753045   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:36.897764   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:39.395522   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:38.935209   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:40.935922   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:42.936185   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:40.753242   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:43.254160   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:41.395722   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:43.895476   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:44.938043   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:47.436185   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:45.753607   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:47.757575   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:45.895628   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:47.898831   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:50.395366   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:49.437057   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:51.936658   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:50.254313   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:52.754096   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:52.396047   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:54.896005   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:53.937359   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:55.939092   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:58.435858   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:55.253159   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:57.752873   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:56.897368   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:59.397094   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:00.937099   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:02.937220   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:59.753924   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:01.754227   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:04.253189   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:01.895645   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:03.895950   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:05.435964   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:07.437247   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:06.753405   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:09.252564   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:06.395775   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:08.397119   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:09.937945   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:12.436531   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:11.254482   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:13.753409   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:10.898350   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:13.397549   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:14.936753   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:17.438482   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:15.753689   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:18.253420   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:15.895365   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:17.897998   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:19.898464   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:19.935559   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:21.935664   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:20.253748   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:22.253878   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:24.254457   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:22.395466   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:24.400100   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:23.935958   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:25.936631   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:28.436748   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:26.752881   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:29.253740   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:26.897228   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:29.396925   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:30.436921   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:32.939573   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:31.254681   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:33.759891   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:31.895948   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:33.899819   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:35.436828   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:37.437536   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:36.252972   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:38.254083   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:36.396572   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:38.895816   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:39.440085   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:41.939589   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:40.752960   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:42.753342   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:40.897788   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:43.396277   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:44.437295   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:46.934854   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:44.753613   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:47.253118   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:45.896539   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:47.897012   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:50.399452   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:48.936795   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:51.435353   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:53.436742   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:49.753890   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:52.252908   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:54.253390   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:52.895504   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:54.896960   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:55.937358   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:58.435997   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:56.256446   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:58.754312   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:56.898710   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:58.899652   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:00.437252   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:02.936336   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:01.254343   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:03.754483   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:01.398833   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:03.896269   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:05.437531   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:07.935848   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:05.755471   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:07.756171   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:05.897369   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:08.397436   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:09.936237   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:11.940482   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:10.253599   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:12.254176   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:14.254316   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:10.898370   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:13.400421   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:14.436922   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:16.936283   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:16.753503   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:19.253120   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:15.896003   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:18.396552   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:19.438479   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:21.936957   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:21.253522   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:23.752947   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:20.895961   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:23.395452   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:24.435005   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:26.437797   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:26.437828   61676 pod_ready.go:81] duration metric: took 4m0.009294112s waiting for pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace to be "Ready" ...
	E0103 20:17:26.437841   61676 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0103 20:17:26.437850   61676 pod_ready.go:38] duration metric: took 4m1.606787831s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:17:26.437868   61676 api_server.go:52] waiting for apiserver process to appear ...
	I0103 20:17:26.437901   61676 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0103 20:17:26.437951   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0103 20:17:26.499917   61676 cri.go:89] found id: "b43e6c342d85d7f5a7233dc5c79962eece071e18b4bbcd6583c66d34a8bba0d6"
	I0103 20:17:26.499942   61676 cri.go:89] found id: ""
	I0103 20:17:26.499958   61676 logs.go:284] 1 containers: [b43e6c342d85d7f5a7233dc5c79962eece071e18b4bbcd6583c66d34a8bba0d6]
	I0103 20:17:26.500014   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:26.504239   61676 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0103 20:17:26.504290   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0103 20:17:26.539965   61676 cri.go:89] found id: "d5b2310ec90e1b2cf6666d6b054b6e3233664a226da6f2c5dbb9f1aad6bf5e40"
	I0103 20:17:26.539992   61676 cri.go:89] found id: ""
	I0103 20:17:26.540001   61676 logs.go:284] 1 containers: [d5b2310ec90e1b2cf6666d6b054b6e3233664a226da6f2c5dbb9f1aad6bf5e40]
	I0103 20:17:26.540052   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:26.544591   61676 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0103 20:17:26.544667   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0103 20:17:26.583231   61676 cri.go:89] found id: "e982a226a7c2e7dda117aabd99279729198043d8b9c79fdc0904c8384f9a031b"
	I0103 20:17:26.583256   61676 cri.go:89] found id: ""
	I0103 20:17:26.583265   61676 logs.go:284] 1 containers: [e982a226a7c2e7dda117aabd99279729198043d8b9c79fdc0904c8384f9a031b]
	I0103 20:17:26.583328   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:26.587642   61676 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0103 20:17:26.587705   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0103 20:17:26.625230   61676 cri.go:89] found id: "91cc8e54c59c4642dc1a808adf81ed78af36cc6dae57d37fd913d2d01d232f3d"
	I0103 20:17:26.625258   61676 cri.go:89] found id: ""
	I0103 20:17:26.625267   61676 logs.go:284] 1 containers: [91cc8e54c59c4642dc1a808adf81ed78af36cc6dae57d37fd913d2d01d232f3d]
	I0103 20:17:26.625329   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:26.629448   61676 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0103 20:17:26.629527   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0103 20:17:26.666698   61676 cri.go:89] found id: "a076ccb3aaf528efca2f2b4a38d06dc3b8376212edc210731b7c268a24909ccf"
	I0103 20:17:26.666726   61676 cri.go:89] found id: ""
	I0103 20:17:26.666736   61676 logs.go:284] 1 containers: [a076ccb3aaf528efca2f2b4a38d06dc3b8376212edc210731b7c268a24909ccf]
	I0103 20:17:26.666796   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:26.671434   61676 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0103 20:17:26.671500   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0103 20:17:26.703900   61676 cri.go:89] found id: "8049f81441fd2e7bb236ca49b119c49cefb7a5c69b6880f606b3cc2789145523"
	I0103 20:17:26.703921   61676 cri.go:89] found id: ""
	I0103 20:17:26.703929   61676 logs.go:284] 1 containers: [8049f81441fd2e7bb236ca49b119c49cefb7a5c69b6880f606b3cc2789145523]
	I0103 20:17:26.703986   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:26.707915   61676 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0103 20:17:26.707979   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0103 20:17:26.747144   61676 cri.go:89] found id: ""
	I0103 20:17:26.747168   61676 logs.go:284] 0 containers: []
	W0103 20:17:26.747182   61676 logs.go:286] No container was found matching "kindnet"
	I0103 20:17:26.747189   61676 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0103 20:17:26.747246   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0103 20:17:26.786418   61676 cri.go:89] found id: "0ed16e65a5dbabdbadbf6d56a4fd7bafe80d50867ed1829e86bfa8f28e8fb719"
	I0103 20:17:26.786441   61676 cri.go:89] found id: "3c57ed4c58edf02d0e73cd6dfc6799993556d74916ffe4c07b91d85346f291e2"
	I0103 20:17:26.786448   61676 cri.go:89] found id: ""
	I0103 20:17:26.786456   61676 logs.go:284] 2 containers: [0ed16e65a5dbabdbadbf6d56a4fd7bafe80d50867ed1829e86bfa8f28e8fb719 3c57ed4c58edf02d0e73cd6dfc6799993556d74916ffe4c07b91d85346f291e2]
	I0103 20:17:26.786515   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:26.790506   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:26.794304   61676 logs.go:123] Gathering logs for kubelet ...
	I0103 20:17:26.794330   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 20:17:26.851272   61676 logs.go:123] Gathering logs for kube-apiserver [b43e6c342d85d7f5a7233dc5c79962eece071e18b4bbcd6583c66d34a8bba0d6] ...
	I0103 20:17:26.851317   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b43e6c342d85d7f5a7233dc5c79962eece071e18b4bbcd6583c66d34a8bba0d6"
	I0103 20:17:26.894480   61676 logs.go:123] Gathering logs for etcd [d5b2310ec90e1b2cf6666d6b054b6e3233664a226da6f2c5dbb9f1aad6bf5e40] ...
	I0103 20:17:26.894508   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5b2310ec90e1b2cf6666d6b054b6e3233664a226da6f2c5dbb9f1aad6bf5e40"
	I0103 20:17:26.941799   61676 logs.go:123] Gathering logs for kube-scheduler [91cc8e54c59c4642dc1a808adf81ed78af36cc6dae57d37fd913d2d01d232f3d] ...
	I0103 20:17:26.941826   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 91cc8e54c59c4642dc1a808adf81ed78af36cc6dae57d37fd913d2d01d232f3d"
	I0103 20:17:26.981759   61676 logs.go:123] Gathering logs for kube-proxy [a076ccb3aaf528efca2f2b4a38d06dc3b8376212edc210731b7c268a24909ccf] ...
	I0103 20:17:26.981793   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a076ccb3aaf528efca2f2b4a38d06dc3b8376212edc210731b7c268a24909ccf"
	I0103 20:17:27.021318   61676 logs.go:123] Gathering logs for storage-provisioner [0ed16e65a5dbabdbadbf6d56a4fd7bafe80d50867ed1829e86bfa8f28e8fb719] ...
	I0103 20:17:27.021347   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ed16e65a5dbabdbadbf6d56a4fd7bafe80d50867ed1829e86bfa8f28e8fb719"
	I0103 20:17:27.061320   61676 logs.go:123] Gathering logs for storage-provisioner [3c57ed4c58edf02d0e73cd6dfc6799993556d74916ffe4c07b91d85346f291e2] ...
	I0103 20:17:27.061351   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c57ed4c58edf02d0e73cd6dfc6799993556d74916ffe4c07b91d85346f291e2"
	I0103 20:17:27.110137   61676 logs.go:123] Gathering logs for dmesg ...
	I0103 20:17:27.110169   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 20:17:27.123548   61676 logs.go:123] Gathering logs for coredns [e982a226a7c2e7dda117aabd99279729198043d8b9c79fdc0904c8384f9a031b] ...
	I0103 20:17:27.123582   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e982a226a7c2e7dda117aabd99279729198043d8b9c79fdc0904c8384f9a031b"
	I0103 20:17:27.162644   61676 logs.go:123] Gathering logs for kube-controller-manager [8049f81441fd2e7bb236ca49b119c49cefb7a5c69b6880f606b3cc2789145523] ...
	I0103 20:17:27.162678   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8049f81441fd2e7bb236ca49b119c49cefb7a5c69b6880f606b3cc2789145523"
	I0103 20:17:27.211599   61676 logs.go:123] Gathering logs for describe nodes ...
	I0103 20:17:27.211636   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0103 20:17:27.361299   61676 logs.go:123] Gathering logs for CRI-O ...
	I0103 20:17:27.361329   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0103 20:17:27.866123   61676 logs.go:123] Gathering logs for container status ...
	I0103 20:17:27.866166   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
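The gather cycle just logged follows a two-step pattern per component: list matching container IDs with "crictl ps -a --quiet --name=<component>", then tail each container's logs with "crictl logs --tail 400 <id>" (plus journalctl for kubelet/CRI-O, dmesg, and kubectl describe nodes). Below is a hedged local sketch of that pattern with os/exec; tailComponentLogs is a hypothetical name, and in the actual run these commands execute on the VM over SSH via ssh_runner.go rather than locally.

// Illustrative local sketch of the two-step gather pattern in the log:
// list container IDs for a component with crictl, then tail each one's logs.
package gather

import (
	"fmt"
	"os/exec"
	"strings"
)

// tailComponentLogs returns the last `tail` log lines for every container
// whose name matches `component` (e.g. "kube-apiserver").
func tailComponentLogs(component string, tail int) (map[string]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return nil, fmt.Errorf("listing %s containers: %w", component, err)
	}
	logs := make(map[string]string)
	for _, id := range strings.Fields(string(out)) {
		body, err := exec.Command("sudo", "crictl", "logs", "--tail", fmt.Sprint(tail), id).CombinedOutput()
		if err != nil {
			return nil, fmt.Errorf("logs for %s: %w", id, err)
		}
		logs[id] = string(body)
	}
	return logs, nil
}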
	I0103 20:17:25.753957   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:27.754559   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:25.896204   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:28.395594   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:30.418870   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:17:30.433778   61676 api_server.go:72] duration metric: took 4m12.637164197s to wait for apiserver process to appear ...
	I0103 20:17:30.433801   61676 api_server.go:88] waiting for apiserver healthz status ...
	I0103 20:17:30.433838   61676 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0103 20:17:30.433911   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0103 20:17:30.472309   61676 cri.go:89] found id: "b43e6c342d85d7f5a7233dc5c79962eece071e18b4bbcd6583c66d34a8bba0d6"
	I0103 20:17:30.472337   61676 cri.go:89] found id: ""
	I0103 20:17:30.472348   61676 logs.go:284] 1 containers: [b43e6c342d85d7f5a7233dc5c79962eece071e18b4bbcd6583c66d34a8bba0d6]
	I0103 20:17:30.472407   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:30.476787   61676 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0103 20:17:30.476858   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0103 20:17:30.522290   61676 cri.go:89] found id: "d5b2310ec90e1b2cf6666d6b054b6e3233664a226da6f2c5dbb9f1aad6bf5e40"
	I0103 20:17:30.522322   61676 cri.go:89] found id: ""
	I0103 20:17:30.522334   61676 logs.go:284] 1 containers: [d5b2310ec90e1b2cf6666d6b054b6e3233664a226da6f2c5dbb9f1aad6bf5e40]
	I0103 20:17:30.522390   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:30.526502   61676 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0103 20:17:30.526581   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0103 20:17:30.568301   61676 cri.go:89] found id: "e982a226a7c2e7dda117aabd99279729198043d8b9c79fdc0904c8384f9a031b"
	I0103 20:17:30.568328   61676 cri.go:89] found id: ""
	I0103 20:17:30.568335   61676 logs.go:284] 1 containers: [e982a226a7c2e7dda117aabd99279729198043d8b9c79fdc0904c8384f9a031b]
	I0103 20:17:30.568382   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:30.572398   61676 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0103 20:17:30.572454   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0103 20:17:30.611671   61676 cri.go:89] found id: "91cc8e54c59c4642dc1a808adf81ed78af36cc6dae57d37fd913d2d01d232f3d"
	I0103 20:17:30.611694   61676 cri.go:89] found id: ""
	I0103 20:17:30.611702   61676 logs.go:284] 1 containers: [91cc8e54c59c4642dc1a808adf81ed78af36cc6dae57d37fd913d2d01d232f3d]
	I0103 20:17:30.611749   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:30.615971   61676 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0103 20:17:30.616035   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0103 20:17:30.658804   61676 cri.go:89] found id: "a076ccb3aaf528efca2f2b4a38d06dc3b8376212edc210731b7c268a24909ccf"
	I0103 20:17:30.658830   61676 cri.go:89] found id: ""
	I0103 20:17:30.658839   61676 logs.go:284] 1 containers: [a076ccb3aaf528efca2f2b4a38d06dc3b8376212edc210731b7c268a24909ccf]
	I0103 20:17:30.658889   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:30.662859   61676 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0103 20:17:30.662930   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0103 20:17:30.705941   61676 cri.go:89] found id: "8049f81441fd2e7bb236ca49b119c49cefb7a5c69b6880f606b3cc2789145523"
	I0103 20:17:30.705968   61676 cri.go:89] found id: ""
	I0103 20:17:30.705976   61676 logs.go:284] 1 containers: [8049f81441fd2e7bb236ca49b119c49cefb7a5c69b6880f606b3cc2789145523]
	I0103 20:17:30.706031   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:30.710228   61676 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0103 20:17:30.710308   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0103 20:17:30.749052   61676 cri.go:89] found id: ""
	I0103 20:17:30.749077   61676 logs.go:284] 0 containers: []
	W0103 20:17:30.749088   61676 logs.go:286] No container was found matching "kindnet"
	I0103 20:17:30.749096   61676 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0103 20:17:30.749157   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0103 20:17:30.786239   61676 cri.go:89] found id: "0ed16e65a5dbabdbadbf6d56a4fd7bafe80d50867ed1829e86bfa8f28e8fb719"
	I0103 20:17:30.786267   61676 cri.go:89] found id: "3c57ed4c58edf02d0e73cd6dfc6799993556d74916ffe4c07b91d85346f291e2"
	I0103 20:17:30.786273   61676 cri.go:89] found id: ""
	I0103 20:17:30.786280   61676 logs.go:284] 2 containers: [0ed16e65a5dbabdbadbf6d56a4fd7bafe80d50867ed1829e86bfa8f28e8fb719 3c57ed4c58edf02d0e73cd6dfc6799993556d74916ffe4c07b91d85346f291e2]
	I0103 20:17:30.786341   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:30.790680   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:30.794294   61676 logs.go:123] Gathering logs for coredns [e982a226a7c2e7dda117aabd99279729198043d8b9c79fdc0904c8384f9a031b] ...
	I0103 20:17:30.794320   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e982a226a7c2e7dda117aabd99279729198043d8b9c79fdc0904c8384f9a031b"
	I0103 20:17:30.835916   61676 logs.go:123] Gathering logs for storage-provisioner [0ed16e65a5dbabdbadbf6d56a4fd7bafe80d50867ed1829e86bfa8f28e8fb719] ...
	I0103 20:17:30.835952   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ed16e65a5dbabdbadbf6d56a4fd7bafe80d50867ed1829e86bfa8f28e8fb719"
	I0103 20:17:30.876225   61676 logs.go:123] Gathering logs for storage-provisioner [3c57ed4c58edf02d0e73cd6dfc6799993556d74916ffe4c07b91d85346f291e2] ...
	I0103 20:17:30.876255   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c57ed4c58edf02d0e73cd6dfc6799993556d74916ffe4c07b91d85346f291e2"
	I0103 20:17:30.917657   61676 logs.go:123] Gathering logs for dmesg ...
	I0103 20:17:30.917684   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 20:17:30.930805   61676 logs.go:123] Gathering logs for describe nodes ...
	I0103 20:17:30.930831   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0103 20:17:31.060049   61676 logs.go:123] Gathering logs for kube-apiserver [b43e6c342d85d7f5a7233dc5c79962eece071e18b4bbcd6583c66d34a8bba0d6] ...
	I0103 20:17:31.060086   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b43e6c342d85d7f5a7233dc5c79962eece071e18b4bbcd6583c66d34a8bba0d6"
	I0103 20:17:31.119725   61676 logs.go:123] Gathering logs for kube-scheduler [91cc8e54c59c4642dc1a808adf81ed78af36cc6dae57d37fd913d2d01d232f3d] ...
	I0103 20:17:31.119754   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 91cc8e54c59c4642dc1a808adf81ed78af36cc6dae57d37fd913d2d01d232f3d"
	I0103 20:17:31.164226   61676 logs.go:123] Gathering logs for kube-proxy [a076ccb3aaf528efca2f2b4a38d06dc3b8376212edc210731b7c268a24909ccf] ...
	I0103 20:17:31.164261   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a076ccb3aaf528efca2f2b4a38d06dc3b8376212edc210731b7c268a24909ccf"
	I0103 20:17:31.204790   61676 logs.go:123] Gathering logs for kube-controller-manager [8049f81441fd2e7bb236ca49b119c49cefb7a5c69b6880f606b3cc2789145523] ...
	I0103 20:17:31.204816   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8049f81441fd2e7bb236ca49b119c49cefb7a5c69b6880f606b3cc2789145523"
	I0103 20:17:31.264949   61676 logs.go:123] Gathering logs for CRI-O ...
	I0103 20:17:31.264984   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0103 20:17:31.658178   61676 logs.go:123] Gathering logs for kubelet ...
	I0103 20:17:31.658217   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 20:17:31.712090   61676 logs.go:123] Gathering logs for etcd [d5b2310ec90e1b2cf6666d6b054b6e3233664a226da6f2c5dbb9f1aad6bf5e40] ...
	I0103 20:17:31.712125   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5b2310ec90e1b2cf6666d6b054b6e3233664a226da6f2c5dbb9f1aad6bf5e40"
	I0103 20:17:31.757333   61676 logs.go:123] Gathering logs for container status ...
	I0103 20:17:31.757364   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 20:17:30.253170   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:32.753056   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:30.896380   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:32.896512   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:35.399775   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:34.304692   61676 api_server.go:253] Checking apiserver healthz at https://192.168.50.197:8443/healthz ...
	I0103 20:17:34.311338   61676 api_server.go:279] https://192.168.50.197:8443/healthz returned 200:
	ok
	I0103 20:17:34.312603   61676 api_server.go:141] control plane version: v1.28.4
	I0103 20:17:34.312624   61676 api_server.go:131] duration metric: took 3.878815002s to wait for apiserver health ...
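The healthz wait above is a plain HTTPS GET against the apiserver's /healthz endpoint, considered healthy once it returns 200 with body "ok". A minimal sketch of that probe follows; checkHealthz is a hypothetical name, and the real minikube check authenticates with the cluster's CA and client certificates from the kubeconfig, so the InsecureSkipVerify here is purely to keep the illustration short.

// Minimal sketch of the healthz probe shown in the log above.
package healthz

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func checkHealthz(endpoint string) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // illustration only
	}
	resp, err := client.Get(endpoint) // e.g. https://192.168.50.197:8443/healthz
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	fmt.Printf("%s returned %d:\n%s\n", endpoint, resp.StatusCode, body) // body is expected to be "ok"
	return nil
}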
	I0103 20:17:34.312632   61676 system_pods.go:43] waiting for kube-system pods to appear ...
	I0103 20:17:34.312651   61676 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0103 20:17:34.312705   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0103 20:17:34.347683   61676 cri.go:89] found id: "b43e6c342d85d7f5a7233dc5c79962eece071e18b4bbcd6583c66d34a8bba0d6"
	I0103 20:17:34.347701   61676 cri.go:89] found id: ""
	I0103 20:17:34.347711   61676 logs.go:284] 1 containers: [b43e6c342d85d7f5a7233dc5c79962eece071e18b4bbcd6583c66d34a8bba0d6]
	I0103 20:17:34.347769   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:34.351695   61676 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0103 20:17:34.351773   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0103 20:17:34.386166   61676 cri.go:89] found id: "d5b2310ec90e1b2cf6666d6b054b6e3233664a226da6f2c5dbb9f1aad6bf5e40"
	I0103 20:17:34.386188   61676 cri.go:89] found id: ""
	I0103 20:17:34.386197   61676 logs.go:284] 1 containers: [d5b2310ec90e1b2cf6666d6b054b6e3233664a226da6f2c5dbb9f1aad6bf5e40]
	I0103 20:17:34.386259   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:34.390352   61676 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0103 20:17:34.390427   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0103 20:17:34.427772   61676 cri.go:89] found id: "e982a226a7c2e7dda117aabd99279729198043d8b9c79fdc0904c8384f9a031b"
	I0103 20:17:34.427801   61676 cri.go:89] found id: ""
	I0103 20:17:34.427811   61676 logs.go:284] 1 containers: [e982a226a7c2e7dda117aabd99279729198043d8b9c79fdc0904c8384f9a031b]
	I0103 20:17:34.427872   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:34.432258   61676 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0103 20:17:34.432324   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0103 20:17:34.471746   61676 cri.go:89] found id: "91cc8e54c59c4642dc1a808adf81ed78af36cc6dae57d37fd913d2d01d232f3d"
	I0103 20:17:34.471789   61676 cri.go:89] found id: ""
	I0103 20:17:34.471812   61676 logs.go:284] 1 containers: [91cc8e54c59c4642dc1a808adf81ed78af36cc6dae57d37fd913d2d01d232f3d]
	I0103 20:17:34.471878   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:34.476656   61676 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0103 20:17:34.476729   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0103 20:17:34.514594   61676 cri.go:89] found id: "a076ccb3aaf528efca2f2b4a38d06dc3b8376212edc210731b7c268a24909ccf"
	I0103 20:17:34.514626   61676 cri.go:89] found id: ""
	I0103 20:17:34.514685   61676 logs.go:284] 1 containers: [a076ccb3aaf528efca2f2b4a38d06dc3b8376212edc210731b7c268a24909ccf]
	I0103 20:17:34.514779   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:34.518789   61676 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0103 20:17:34.518849   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0103 20:17:34.555672   61676 cri.go:89] found id: "8049f81441fd2e7bb236ca49b119c49cefb7a5c69b6880f606b3cc2789145523"
	I0103 20:17:34.555698   61676 cri.go:89] found id: ""
	I0103 20:17:34.555707   61676 logs.go:284] 1 containers: [8049f81441fd2e7bb236ca49b119c49cefb7a5c69b6880f606b3cc2789145523]
	I0103 20:17:34.555771   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:34.560278   61676 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0103 20:17:34.560338   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0103 20:17:34.598718   61676 cri.go:89] found id: ""
	I0103 20:17:34.598742   61676 logs.go:284] 0 containers: []
	W0103 20:17:34.598753   61676 logs.go:286] No container was found matching "kindnet"
	I0103 20:17:34.598759   61676 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0103 20:17:34.598810   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0103 20:17:34.635723   61676 cri.go:89] found id: "0ed16e65a5dbabdbadbf6d56a4fd7bafe80d50867ed1829e86bfa8f28e8fb719"
	I0103 20:17:34.635751   61676 cri.go:89] found id: "3c57ed4c58edf02d0e73cd6dfc6799993556d74916ffe4c07b91d85346f291e2"
	I0103 20:17:34.635758   61676 cri.go:89] found id: ""
	I0103 20:17:34.635767   61676 logs.go:284] 2 containers: [0ed16e65a5dbabdbadbf6d56a4fd7bafe80d50867ed1829e86bfa8f28e8fb719 3c57ed4c58edf02d0e73cd6dfc6799993556d74916ffe4c07b91d85346f291e2]
	I0103 20:17:34.635814   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:34.640466   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:34.644461   61676 logs.go:123] Gathering logs for dmesg ...
	I0103 20:17:34.644490   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 20:17:34.659819   61676 logs.go:123] Gathering logs for coredns [e982a226a7c2e7dda117aabd99279729198043d8b9c79fdc0904c8384f9a031b] ...
	I0103 20:17:34.659856   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e982a226a7c2e7dda117aabd99279729198043d8b9c79fdc0904c8384f9a031b"
	I0103 20:17:34.697807   61676 logs.go:123] Gathering logs for kube-scheduler [91cc8e54c59c4642dc1a808adf81ed78af36cc6dae57d37fd913d2d01d232f3d] ...
	I0103 20:17:34.697840   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 91cc8e54c59c4642dc1a808adf81ed78af36cc6dae57d37fd913d2d01d232f3d"
	I0103 20:17:34.745366   61676 logs.go:123] Gathering logs for kube-controller-manager [8049f81441fd2e7bb236ca49b119c49cefb7a5c69b6880f606b3cc2789145523] ...
	I0103 20:17:34.745397   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8049f81441fd2e7bb236ca49b119c49cefb7a5c69b6880f606b3cc2789145523"
	I0103 20:17:34.804885   61676 logs.go:123] Gathering logs for kube-apiserver [b43e6c342d85d7f5a7233dc5c79962eece071e18b4bbcd6583c66d34a8bba0d6] ...
	I0103 20:17:34.804919   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b43e6c342d85d7f5a7233dc5c79962eece071e18b4bbcd6583c66d34a8bba0d6"
	I0103 20:17:34.848753   61676 logs.go:123] Gathering logs for etcd [d5b2310ec90e1b2cf6666d6b054b6e3233664a226da6f2c5dbb9f1aad6bf5e40] ...
	I0103 20:17:34.848784   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5b2310ec90e1b2cf6666d6b054b6e3233664a226da6f2c5dbb9f1aad6bf5e40"
	I0103 20:17:34.891492   61676 logs.go:123] Gathering logs for CRI-O ...
	I0103 20:17:34.891524   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0103 20:17:35.234093   61676 logs.go:123] Gathering logs for kube-proxy [a076ccb3aaf528efca2f2b4a38d06dc3b8376212edc210731b7c268a24909ccf] ...
	I0103 20:17:35.234133   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a076ccb3aaf528efca2f2b4a38d06dc3b8376212edc210731b7c268a24909ccf"
	I0103 20:17:35.281396   61676 logs.go:123] Gathering logs for storage-provisioner [0ed16e65a5dbabdbadbf6d56a4fd7bafe80d50867ed1829e86bfa8f28e8fb719] ...
	I0103 20:17:35.281425   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ed16e65a5dbabdbadbf6d56a4fd7bafe80d50867ed1829e86bfa8f28e8fb719"
	I0103 20:17:35.317595   61676 logs.go:123] Gathering logs for storage-provisioner [3c57ed4c58edf02d0e73cd6dfc6799993556d74916ffe4c07b91d85346f291e2] ...
	I0103 20:17:35.317622   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c57ed4c58edf02d0e73cd6dfc6799993556d74916ffe4c07b91d85346f291e2"
	I0103 20:17:35.357552   61676 logs.go:123] Gathering logs for container status ...
	I0103 20:17:35.357600   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 20:17:35.405369   61676 logs.go:123] Gathering logs for kubelet ...
	I0103 20:17:35.405394   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 20:17:35.459496   61676 logs.go:123] Gathering logs for describe nodes ...
	I0103 20:17:35.459535   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0103 20:17:38.101844   61676 system_pods.go:59] 8 kube-system pods found
	I0103 20:17:38.101870   61676 system_pods.go:61] "coredns-5dd5756b68-sx6gg" [6a4ea161-1a32-4c3b-9a0d-b4c596492d8b] Running
	I0103 20:17:38.101875   61676 system_pods.go:61] "etcd-embed-certs-451331" [01d6441d-5e39-405a-81df-c2ed1e28cf0b] Running
	I0103 20:17:38.101879   61676 system_pods.go:61] "kube-apiserver-embed-certs-451331" [ed38f120-6a1a-48e7-9346-f792f2e13cfc] Running
	I0103 20:17:38.101886   61676 system_pods.go:61] "kube-controller-manager-embed-certs-451331" [4ca17ea6-a7e6-425b-98ba-7f917ceb91a0] Running
	I0103 20:17:38.101892   61676 system_pods.go:61] "kube-proxy-fsnb9" [d1f00cf1-e9c4-442b-a6b3-b633252b840c] Running
	I0103 20:17:38.101898   61676 system_pods.go:61] "kube-scheduler-embed-certs-451331" [00ec8091-7ed7-40b0-8b63-1c548fa8632d] Running
	I0103 20:17:38.101907   61676 system_pods.go:61] "metrics-server-57f55c9bc5-sm8rb" [12b9f83d-abf8-431c-a271-b8489d32f0de] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0103 20:17:38.101919   61676 system_pods.go:61] "storage-provisioner" [cbce49e7-cef5-40a1-a017-906fcc77ef66] Running
	I0103 20:17:38.101931   61676 system_pods.go:74] duration metric: took 3.789293156s to wait for pod list to return data ...
	I0103 20:17:38.101940   61676 default_sa.go:34] waiting for default service account to be created ...
	I0103 20:17:38.104360   61676 default_sa.go:45] found service account: "default"
	I0103 20:17:38.104386   61676 default_sa.go:55] duration metric: took 2.437157ms for default service account to be created ...
	I0103 20:17:38.104395   61676 system_pods.go:116] waiting for k8s-apps to be running ...
	I0103 20:17:38.110198   61676 system_pods.go:86] 8 kube-system pods found
	I0103 20:17:38.110226   61676 system_pods.go:89] "coredns-5dd5756b68-sx6gg" [6a4ea161-1a32-4c3b-9a0d-b4c596492d8b] Running
	I0103 20:17:38.110233   61676 system_pods.go:89] "etcd-embed-certs-451331" [01d6441d-5e39-405a-81df-c2ed1e28cf0b] Running
	I0103 20:17:38.110241   61676 system_pods.go:89] "kube-apiserver-embed-certs-451331" [ed38f120-6a1a-48e7-9346-f792f2e13cfc] Running
	I0103 20:17:38.110247   61676 system_pods.go:89] "kube-controller-manager-embed-certs-451331" [4ca17ea6-a7e6-425b-98ba-7f917ceb91a0] Running
	I0103 20:17:38.110254   61676 system_pods.go:89] "kube-proxy-fsnb9" [d1f00cf1-e9c4-442b-a6b3-b633252b840c] Running
	I0103 20:17:38.110262   61676 system_pods.go:89] "kube-scheduler-embed-certs-451331" [00ec8091-7ed7-40b0-8b63-1c548fa8632d] Running
	I0103 20:17:38.110272   61676 system_pods.go:89] "metrics-server-57f55c9bc5-sm8rb" [12b9f83d-abf8-431c-a271-b8489d32f0de] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0103 20:17:38.110287   61676 system_pods.go:89] "storage-provisioner" [cbce49e7-cef5-40a1-a017-906fcc77ef66] Running
	I0103 20:17:38.110300   61676 system_pods.go:126] duration metric: took 5.897003ms to wait for k8s-apps to be running ...
	I0103 20:17:38.110310   61676 system_svc.go:44] waiting for kubelet service to be running ....
	I0103 20:17:38.110359   61676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 20:17:38.129025   61676 system_svc.go:56] duration metric: took 18.705736ms WaitForService to wait for kubelet.
	I0103 20:17:38.129071   61676 kubeadm.go:581] duration metric: took 4m20.332460734s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0103 20:17:38.129104   61676 node_conditions.go:102] verifying NodePressure condition ...
	I0103 20:17:38.132674   61676 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0103 20:17:38.132703   61676 node_conditions.go:123] node cpu capacity is 2
	I0103 20:17:38.132718   61676 node_conditions.go:105] duration metric: took 3.608193ms to run NodePressure ...
	I0103 20:17:38.132803   61676 start.go:228] waiting for startup goroutines ...
	I0103 20:17:38.132830   61676 start.go:233] waiting for cluster config update ...
	I0103 20:17:38.132846   61676 start.go:242] writing updated cluster config ...
	I0103 20:17:38.133198   61676 ssh_runner.go:195] Run: rm -f paused
	I0103 20:17:38.185728   61676 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0103 20:17:38.187862   61676 out.go:177] * Done! kubectl is now configured to use "embed-certs-451331" cluster and "default" namespace by default
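The final line notes "kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)", i.e. the difference between the minor version components of the local kubectl and the cluster's control plane. A small sketch of that arithmetic follows; minorSkew is a hypothetical helper and only illustrates the comparison, not minikube's actual version-handling code.

// Illustrative sketch of the "(minor skew: 1)" calculation in the log line above.
package skew

import (
	"strconv"
	"strings"
)

// minorSkew returns |minor(a) - minor(b)| for versions like "1.29.0".
func minorSkew(a, b string) int {
	minor := func(v string) int {
		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
		if len(parts) < 2 {
			return 0
		}
		n, _ := strconv.Atoi(parts[1])
		return n
	}
	d := minor(a) - minor(b)
	if d < 0 {
		d = -d
	}
	return d
}

// minorSkew("1.29.0", "1.28.4") == 1, matching the report above.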
	I0103 20:17:34.753175   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:37.254091   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:37.896317   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:40.396299   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:39.752580   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:41.755418   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:44.253073   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:42.897389   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:45.396646   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:46.253958   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:48.753284   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:47.398164   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:49.895246   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:50.754133   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:53.253046   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:51.895627   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:53.897877   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:55.254029   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:57.752707   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:56.398655   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:58.897483   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:59.753306   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:18:01.753500   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:18:02.255901   62015 pod_ready.go:81] duration metric: took 4m0.010124972s waiting for pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace to be "Ready" ...
	E0103 20:18:02.255929   62015 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0103 20:18:02.255939   62015 pod_ready.go:38] duration metric: took 4m4.070078749s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:18:02.255957   62015 api_server.go:52] waiting for apiserver process to appear ...
	I0103 20:18:02.255989   62015 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0103 20:18:02.256064   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0103 20:18:02.312578   62015 cri.go:89] found id: "fb19a705262540d0025443f8a9db8a3139cffa87d8fd8f412b12547818a91a8b"
	I0103 20:18:02.312606   62015 cri.go:89] found id: ""
	I0103 20:18:02.312616   62015 logs.go:284] 1 containers: [fb19a705262540d0025443f8a9db8a3139cffa87d8fd8f412b12547818a91a8b]
	I0103 20:18:02.312679   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:02.317969   62015 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0103 20:18:02.318064   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0103 20:18:02.361423   62015 cri.go:89] found id: "f7d2f606bd4457e00acad3e38e40d2a0eeba1830b665bd0b453061447cb63748"
	I0103 20:18:02.361451   62015 cri.go:89] found id: ""
	I0103 20:18:02.361464   62015 logs.go:284] 1 containers: [f7d2f606bd4457e00acad3e38e40d2a0eeba1830b665bd0b453061447cb63748]
	I0103 20:18:02.361527   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:02.365691   62015 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0103 20:18:02.365772   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0103 20:18:02.415087   62015 cri.go:89] found id: "b13d0a23b2b29e43671041dd6b0519d5cdc0ff78c47236c54585056c177f7f4a"
	I0103 20:18:02.415118   62015 cri.go:89] found id: ""
	I0103 20:18:02.415128   62015 logs.go:284] 1 containers: [b13d0a23b2b29e43671041dd6b0519d5cdc0ff78c47236c54585056c177f7f4a]
	I0103 20:18:02.415188   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:02.419409   62015 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0103 20:18:02.419493   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0103 20:18:02.459715   62015 cri.go:89] found id: "03433af76d74a120ca843ab0d9d45d2a0808d9048bf656bff68d7b7371082893"
	I0103 20:18:02.459744   62015 cri.go:89] found id: ""
	I0103 20:18:02.459754   62015 logs.go:284] 1 containers: [03433af76d74a120ca843ab0d9d45d2a0808d9048bf656bff68d7b7371082893]
	I0103 20:18:02.459816   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:02.464105   62015 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0103 20:18:02.464186   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0103 20:18:02.515523   62015 cri.go:89] found id: "250be399ab1a0c358e410293a6952aadd55d48a462f2edb9c6d0b560eb323cd8"
	I0103 20:18:02.515547   62015 cri.go:89] found id: ""
	I0103 20:18:02.515556   62015 logs.go:284] 1 containers: [250be399ab1a0c358e410293a6952aadd55d48a462f2edb9c6d0b560eb323cd8]
	I0103 20:18:02.515619   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:02.519586   62015 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0103 20:18:02.519646   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0103 20:18:02.561187   62015 cri.go:89] found id: "67f470e7e603d5253da506a4ad4eacce339e32b382bb6ad5e981e6f0c40abb85"
	I0103 20:18:02.561210   62015 cri.go:89] found id: ""
	I0103 20:18:02.561219   62015 logs.go:284] 1 containers: [67f470e7e603d5253da506a4ad4eacce339e32b382bb6ad5e981e6f0c40abb85]
	I0103 20:18:02.561288   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:02.566206   62015 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0103 20:18:02.566289   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0103 20:18:02.610993   62015 cri.go:89] found id: ""
	I0103 20:18:02.611019   62015 logs.go:284] 0 containers: []
	W0103 20:18:02.611029   62015 logs.go:286] No container was found matching "kindnet"
	I0103 20:18:02.611036   62015 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0103 20:18:02.611111   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0103 20:18:02.651736   62015 cri.go:89] found id: "08f95eed823c13190efb38f0b605b92442b8229f1fd1e3b9ab3f2d7fdf18c052"
	I0103 20:18:02.651764   62015 cri.go:89] found id: "367b9549fe5f7c717d71d33d0fbf5559d0b671b4eec29201566aa8354781474d"
	I0103 20:18:02.651771   62015 cri.go:89] found id: ""
	I0103 20:18:02.651779   62015 logs.go:284] 2 containers: [08f95eed823c13190efb38f0b605b92442b8229f1fd1e3b9ab3f2d7fdf18c052 367b9549fe5f7c717d71d33d0fbf5559d0b671b4eec29201566aa8354781474d]
	I0103 20:18:02.651839   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:02.656284   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:02.660614   62015 logs.go:123] Gathering logs for etcd [f7d2f606bd4457e00acad3e38e40d2a0eeba1830b665bd0b453061447cb63748] ...
	I0103 20:18:02.660636   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7d2f606bd4457e00acad3e38e40d2a0eeba1830b665bd0b453061447cb63748"
	I0103 20:18:02.707759   62015 logs.go:123] Gathering logs for kube-controller-manager [67f470e7e603d5253da506a4ad4eacce339e32b382bb6ad5e981e6f0c40abb85] ...
	I0103 20:18:02.707804   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 67f470e7e603d5253da506a4ad4eacce339e32b382bb6ad5e981e6f0c40abb85"
	I0103 20:18:02.766498   62015 logs.go:123] Gathering logs for CRI-O ...
	I0103 20:18:02.766551   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0103 20:18:03.227838   62015 logs.go:123] Gathering logs for kube-proxy [250be399ab1a0c358e410293a6952aadd55d48a462f2edb9c6d0b560eb323cd8] ...
	I0103 20:18:03.227884   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 250be399ab1a0c358e410293a6952aadd55d48a462f2edb9c6d0b560eb323cd8"
	I0103 20:18:03.269131   62015 logs.go:123] Gathering logs for storage-provisioner [08f95eed823c13190efb38f0b605b92442b8229f1fd1e3b9ab3f2d7fdf18c052] ...
	I0103 20:18:03.269174   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08f95eed823c13190efb38f0b605b92442b8229f1fd1e3b9ab3f2d7fdf18c052"
	I0103 20:18:03.307383   62015 logs.go:123] Gathering logs for kubelet ...
	I0103 20:18:03.307410   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 20:18:03.362005   62015 logs.go:123] Gathering logs for kube-apiserver [fb19a705262540d0025443f8a9db8a3139cffa87d8fd8f412b12547818a91a8b] ...
	I0103 20:18:03.362043   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb19a705262540d0025443f8a9db8a3139cffa87d8fd8f412b12547818a91a8b"
	I0103 20:18:03.412300   62015 logs.go:123] Gathering logs for coredns [b13d0a23b2b29e43671041dd6b0519d5cdc0ff78c47236c54585056c177f7f4a] ...
	I0103 20:18:03.412333   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b13d0a23b2b29e43671041dd6b0519d5cdc0ff78c47236c54585056c177f7f4a"
	I0103 20:18:03.448896   62015 logs.go:123] Gathering logs for describe nodes ...
	I0103 20:18:03.448922   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0103 20:18:03.587950   62015 logs.go:123] Gathering logs for kube-scheduler [03433af76d74a120ca843ab0d9d45d2a0808d9048bf656bff68d7b7371082893] ...
	I0103 20:18:03.587982   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03433af76d74a120ca843ab0d9d45d2a0808d9048bf656bff68d7b7371082893"
	I0103 20:18:03.629411   62015 logs.go:123] Gathering logs for container status ...
	I0103 20:18:03.629438   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 20:18:03.672468   62015 logs.go:123] Gathering logs for dmesg ...
	I0103 20:18:03.672499   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 20:18:03.685645   62015 logs.go:123] Gathering logs for storage-provisioner [367b9549fe5f7c717d71d33d0fbf5559d0b671b4eec29201566aa8354781474d] ...
	I0103 20:18:03.685682   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 367b9549fe5f7c717d71d33d0fbf5559d0b671b4eec29201566aa8354781474d"
	I0103 20:18:01.395689   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:18:03.396256   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:18:06.229417   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:18:06.244272   62015 api_server.go:72] duration metric: took 4m15.901019711s to wait for apiserver process to appear ...
	I0103 20:18:06.244306   62015 api_server.go:88] waiting for apiserver healthz status ...
	I0103 20:18:06.244351   62015 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0103 20:18:06.244412   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0103 20:18:06.292204   62015 cri.go:89] found id: "fb19a705262540d0025443f8a9db8a3139cffa87d8fd8f412b12547818a91a8b"
	I0103 20:18:06.292235   62015 cri.go:89] found id: ""
	I0103 20:18:06.292246   62015 logs.go:284] 1 containers: [fb19a705262540d0025443f8a9db8a3139cffa87d8fd8f412b12547818a91a8b]
	I0103 20:18:06.292309   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:06.296724   62015 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0103 20:18:06.296791   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0103 20:18:06.333984   62015 cri.go:89] found id: "f7d2f606bd4457e00acad3e38e40d2a0eeba1830b665bd0b453061447cb63748"
	I0103 20:18:06.334012   62015 cri.go:89] found id: ""
	I0103 20:18:06.334023   62015 logs.go:284] 1 containers: [f7d2f606bd4457e00acad3e38e40d2a0eeba1830b665bd0b453061447cb63748]
	I0103 20:18:06.334079   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:06.338045   62015 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0103 20:18:06.338123   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0103 20:18:06.374586   62015 cri.go:89] found id: "b13d0a23b2b29e43671041dd6b0519d5cdc0ff78c47236c54585056c177f7f4a"
	I0103 20:18:06.374610   62015 cri.go:89] found id: ""
	I0103 20:18:06.374617   62015 logs.go:284] 1 containers: [b13d0a23b2b29e43671041dd6b0519d5cdc0ff78c47236c54585056c177f7f4a]
	I0103 20:18:06.374669   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:06.378720   62015 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0103 20:18:06.378792   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0103 20:18:06.416220   62015 cri.go:89] found id: "03433af76d74a120ca843ab0d9d45d2a0808d9048bf656bff68d7b7371082893"
	I0103 20:18:06.416240   62015 cri.go:89] found id: ""
	I0103 20:18:06.416247   62015 logs.go:284] 1 containers: [03433af76d74a120ca843ab0d9d45d2a0808d9048bf656bff68d7b7371082893]
	I0103 20:18:06.416300   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:06.420190   62015 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0103 20:18:06.420247   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0103 20:18:06.458725   62015 cri.go:89] found id: "250be399ab1a0c358e410293a6952aadd55d48a462f2edb9c6d0b560eb323cd8"
	I0103 20:18:06.458745   62015 cri.go:89] found id: ""
	I0103 20:18:06.458754   62015 logs.go:284] 1 containers: [250be399ab1a0c358e410293a6952aadd55d48a462f2edb9c6d0b560eb323cd8]
	I0103 20:18:06.458808   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:06.462703   62015 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0103 20:18:06.462759   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0103 20:18:06.504559   62015 cri.go:89] found id: "67f470e7e603d5253da506a4ad4eacce339e32b382bb6ad5e981e6f0c40abb85"
	I0103 20:18:06.504587   62015 cri.go:89] found id: ""
	I0103 20:18:06.504596   62015 logs.go:284] 1 containers: [67f470e7e603d5253da506a4ad4eacce339e32b382bb6ad5e981e6f0c40abb85]
	I0103 20:18:06.504659   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:06.508602   62015 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0103 20:18:06.508662   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0103 20:18:06.559810   62015 cri.go:89] found id: ""
	I0103 20:18:06.559833   62015 logs.go:284] 0 containers: []
	W0103 20:18:06.559840   62015 logs.go:286] No container was found matching "kindnet"
	I0103 20:18:06.559846   62015 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0103 20:18:06.559905   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0103 20:18:06.598672   62015 cri.go:89] found id: "08f95eed823c13190efb38f0b605b92442b8229f1fd1e3b9ab3f2d7fdf18c052"
	I0103 20:18:06.598697   62015 cri.go:89] found id: "367b9549fe5f7c717d71d33d0fbf5559d0b671b4eec29201566aa8354781474d"
	I0103 20:18:06.598704   62015 cri.go:89] found id: ""
	I0103 20:18:06.598712   62015 logs.go:284] 2 containers: [08f95eed823c13190efb38f0b605b92442b8229f1fd1e3b9ab3f2d7fdf18c052 367b9549fe5f7c717d71d33d0fbf5559d0b671b4eec29201566aa8354781474d]
	I0103 20:18:06.598766   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:06.602828   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:06.607033   62015 logs.go:123] Gathering logs for describe nodes ...
	I0103 20:18:06.607050   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0103 20:18:06.758606   62015 logs.go:123] Gathering logs for storage-provisioner [367b9549fe5f7c717d71d33d0fbf5559d0b671b4eec29201566aa8354781474d] ...
	I0103 20:18:06.758634   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 367b9549fe5f7c717d71d33d0fbf5559d0b671b4eec29201566aa8354781474d"
	I0103 20:18:06.797521   62015 logs.go:123] Gathering logs for kubelet ...
	I0103 20:18:06.797552   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 20:18:06.856126   62015 logs.go:123] Gathering logs for kube-apiserver [fb19a705262540d0025443f8a9db8a3139cffa87d8fd8f412b12547818a91a8b] ...
	I0103 20:18:06.856159   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb19a705262540d0025443f8a9db8a3139cffa87d8fd8f412b12547818a91a8b"
	I0103 20:18:06.902629   62015 logs.go:123] Gathering logs for etcd [f7d2f606bd4457e00acad3e38e40d2a0eeba1830b665bd0b453061447cb63748] ...
	I0103 20:18:06.902656   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7d2f606bd4457e00acad3e38e40d2a0eeba1830b665bd0b453061447cb63748"
	I0103 20:18:06.953115   62015 logs.go:123] Gathering logs for storage-provisioner [08f95eed823c13190efb38f0b605b92442b8229f1fd1e3b9ab3f2d7fdf18c052] ...
	I0103 20:18:06.953154   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08f95eed823c13190efb38f0b605b92442b8229f1fd1e3b9ab3f2d7fdf18c052"
	I0103 20:18:06.993311   62015 logs.go:123] Gathering logs for CRI-O ...
	I0103 20:18:06.993342   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0103 20:18:07.393614   62015 logs.go:123] Gathering logs for dmesg ...
	I0103 20:18:07.393655   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 20:18:07.408367   62015 logs.go:123] Gathering logs for kube-proxy [250be399ab1a0c358e410293a6952aadd55d48a462f2edb9c6d0b560eb323cd8] ...
	I0103 20:18:07.408397   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 250be399ab1a0c358e410293a6952aadd55d48a462f2edb9c6d0b560eb323cd8"
	I0103 20:18:07.446725   62015 logs.go:123] Gathering logs for container status ...
	I0103 20:18:07.446756   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 20:18:07.494564   62015 logs.go:123] Gathering logs for coredns [b13d0a23b2b29e43671041dd6b0519d5cdc0ff78c47236c54585056c177f7f4a] ...
	I0103 20:18:07.494595   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b13d0a23b2b29e43671041dd6b0519d5cdc0ff78c47236c54585056c177f7f4a"
	I0103 20:18:07.529151   62015 logs.go:123] Gathering logs for kube-scheduler [03433af76d74a120ca843ab0d9d45d2a0808d9048bf656bff68d7b7371082893] ...
	I0103 20:18:07.529176   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03433af76d74a120ca843ab0d9d45d2a0808d9048bf656bff68d7b7371082893"
	I0103 20:18:07.577090   62015 logs.go:123] Gathering logs for kube-controller-manager [67f470e7e603d5253da506a4ad4eacce339e32b382bb6ad5e981e6f0c40abb85] ...
	I0103 20:18:07.577118   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 67f470e7e603d5253da506a4ad4eacce339e32b382bb6ad5e981e6f0c40abb85"
	I0103 20:18:05.895682   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:18:08.395751   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:18:10.396488   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:18:10.133806   62015 api_server.go:253] Checking apiserver healthz at https://192.168.61.245:8443/healthz ...
	I0103 20:18:10.138606   62015 api_server.go:279] https://192.168.61.245:8443/healthz returned 200:
	ok
	I0103 20:18:10.139965   62015 api_server.go:141] control plane version: v1.29.0-rc.2
	I0103 20:18:10.139986   62015 api_server.go:131] duration metric: took 3.895673488s to wait for apiserver health ...
	I0103 20:18:10.140004   62015 system_pods.go:43] waiting for kube-system pods to appear ...
	I0103 20:18:10.140032   62015 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0103 20:18:10.140078   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0103 20:18:10.177309   62015 cri.go:89] found id: "fb19a705262540d0025443f8a9db8a3139cffa87d8fd8f412b12547818a91a8b"
	I0103 20:18:10.177336   62015 cri.go:89] found id: ""
	I0103 20:18:10.177347   62015 logs.go:284] 1 containers: [fb19a705262540d0025443f8a9db8a3139cffa87d8fd8f412b12547818a91a8b]
	I0103 20:18:10.177398   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:10.181215   62015 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0103 20:18:10.181287   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0103 20:18:10.217151   62015 cri.go:89] found id: "f7d2f606bd4457e00acad3e38e40d2a0eeba1830b665bd0b453061447cb63748"
	I0103 20:18:10.217174   62015 cri.go:89] found id: ""
	I0103 20:18:10.217183   62015 logs.go:284] 1 containers: [f7d2f606bd4457e00acad3e38e40d2a0eeba1830b665bd0b453061447cb63748]
	I0103 20:18:10.217242   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:10.221363   62015 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0103 20:18:10.221447   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0103 20:18:10.271359   62015 cri.go:89] found id: "b13d0a23b2b29e43671041dd6b0519d5cdc0ff78c47236c54585056c177f7f4a"
	I0103 20:18:10.271387   62015 cri.go:89] found id: ""
	I0103 20:18:10.271397   62015 logs.go:284] 1 containers: [b13d0a23b2b29e43671041dd6b0519d5cdc0ff78c47236c54585056c177f7f4a]
	I0103 20:18:10.271460   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:10.277366   62015 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0103 20:18:10.277439   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0103 20:18:10.325567   62015 cri.go:89] found id: "03433af76d74a120ca843ab0d9d45d2a0808d9048bf656bff68d7b7371082893"
	I0103 20:18:10.325594   62015 cri.go:89] found id: ""
	I0103 20:18:10.325604   62015 logs.go:284] 1 containers: [03433af76d74a120ca843ab0d9d45d2a0808d9048bf656bff68d7b7371082893]
	I0103 20:18:10.325662   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:10.331222   62015 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0103 20:18:10.331292   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0103 20:18:10.370488   62015 cri.go:89] found id: "250be399ab1a0c358e410293a6952aadd55d48a462f2edb9c6d0b560eb323cd8"
	I0103 20:18:10.370516   62015 cri.go:89] found id: ""
	I0103 20:18:10.370539   62015 logs.go:284] 1 containers: [250be399ab1a0c358e410293a6952aadd55d48a462f2edb9c6d0b560eb323cd8]
	I0103 20:18:10.370598   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:10.375213   62015 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0103 20:18:10.375272   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0103 20:18:10.417606   62015 cri.go:89] found id: "67f470e7e603d5253da506a4ad4eacce339e32b382bb6ad5e981e6f0c40abb85"
	I0103 20:18:10.417626   62015 cri.go:89] found id: ""
	I0103 20:18:10.417633   62015 logs.go:284] 1 containers: [67f470e7e603d5253da506a4ad4eacce339e32b382bb6ad5e981e6f0c40abb85]
	I0103 20:18:10.417678   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:10.421786   62015 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0103 20:18:10.421848   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0103 20:18:10.459092   62015 cri.go:89] found id: ""
	I0103 20:18:10.459119   62015 logs.go:284] 0 containers: []
	W0103 20:18:10.459129   62015 logs.go:286] No container was found matching "kindnet"
	I0103 20:18:10.459136   62015 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0103 20:18:10.459184   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0103 20:18:10.504845   62015 cri.go:89] found id: "08f95eed823c13190efb38f0b605b92442b8229f1fd1e3b9ab3f2d7fdf18c052"
	I0103 20:18:10.504874   62015 cri.go:89] found id: "367b9549fe5f7c717d71d33d0fbf5559d0b671b4eec29201566aa8354781474d"
	I0103 20:18:10.504879   62015 cri.go:89] found id: ""
	I0103 20:18:10.504886   62015 logs.go:284] 2 containers: [08f95eed823c13190efb38f0b605b92442b8229f1fd1e3b9ab3f2d7fdf18c052 367b9549fe5f7c717d71d33d0fbf5559d0b671b4eec29201566aa8354781474d]
	I0103 20:18:10.504935   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:10.509189   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:10.513671   62015 logs.go:123] Gathering logs for storage-provisioner [367b9549fe5f7c717d71d33d0fbf5559d0b671b4eec29201566aa8354781474d] ...
	I0103 20:18:10.513692   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 367b9549fe5f7c717d71d33d0fbf5559d0b671b4eec29201566aa8354781474d"
	I0103 20:18:10.553961   62015 logs.go:123] Gathering logs for kubelet ...
	I0103 20:18:10.553988   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 20:18:10.606422   62015 logs.go:123] Gathering logs for dmesg ...
	I0103 20:18:10.606463   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 20:18:10.620647   62015 logs.go:123] Gathering logs for kube-controller-manager [67f470e7e603d5253da506a4ad4eacce339e32b382bb6ad5e981e6f0c40abb85] ...
	I0103 20:18:10.620677   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 67f470e7e603d5253da506a4ad4eacce339e32b382bb6ad5e981e6f0c40abb85"
	I0103 20:18:10.678322   62015 logs.go:123] Gathering logs for describe nodes ...
	I0103 20:18:10.678358   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0103 20:18:10.806514   62015 logs.go:123] Gathering logs for container status ...
	I0103 20:18:10.806569   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 20:18:10.862551   62015 logs.go:123] Gathering logs for kube-apiserver [fb19a705262540d0025443f8a9db8a3139cffa87d8fd8f412b12547818a91a8b] ...
	I0103 20:18:10.862589   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb19a705262540d0025443f8a9db8a3139cffa87d8fd8f412b12547818a91a8b"
	I0103 20:18:10.917533   62015 logs.go:123] Gathering logs for etcd [f7d2f606bd4457e00acad3e38e40d2a0eeba1830b665bd0b453061447cb63748] ...
	I0103 20:18:10.917566   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7d2f606bd4457e00acad3e38e40d2a0eeba1830b665bd0b453061447cb63748"
	I0103 20:18:10.988668   62015 logs.go:123] Gathering logs for storage-provisioner [08f95eed823c13190efb38f0b605b92442b8229f1fd1e3b9ab3f2d7fdf18c052] ...
	I0103 20:18:10.988702   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08f95eed823c13190efb38f0b605b92442b8229f1fd1e3b9ab3f2d7fdf18c052"
	I0103 20:18:11.030485   62015 logs.go:123] Gathering logs for CRI-O ...
	I0103 20:18:11.030549   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0103 20:18:11.425651   62015 logs.go:123] Gathering logs for coredns [b13d0a23b2b29e43671041dd6b0519d5cdc0ff78c47236c54585056c177f7f4a] ...
	I0103 20:18:11.425686   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b13d0a23b2b29e43671041dd6b0519d5cdc0ff78c47236c54585056c177f7f4a"
	I0103 20:18:11.481991   62015 logs.go:123] Gathering logs for kube-scheduler [03433af76d74a120ca843ab0d9d45d2a0808d9048bf656bff68d7b7371082893] ...
	I0103 20:18:11.482019   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03433af76d74a120ca843ab0d9d45d2a0808d9048bf656bff68d7b7371082893"
	I0103 20:18:11.526299   62015 logs.go:123] Gathering logs for kube-proxy [250be399ab1a0c358e410293a6952aadd55d48a462f2edb9c6d0b560eb323cd8] ...
	I0103 20:18:11.526335   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 250be399ab1a0c358e410293a6952aadd55d48a462f2edb9c6d0b560eb323cd8"
	I0103 20:18:14.082821   62015 system_pods.go:59] 8 kube-system pods found
	I0103 20:18:14.082847   62015 system_pods.go:61] "coredns-76f75df574-rbx58" [d5e91e6a-e3f9-4dbc-83ff-3069cb67847c] Running
	I0103 20:18:14.082853   62015 system_pods.go:61] "etcd-no-preload-749210" [3cfe84f3-28bd-490f-a7fc-152c1b9784ce] Running
	I0103 20:18:14.082857   62015 system_pods.go:61] "kube-apiserver-no-preload-749210" [1d9d03fa-23c6-4432-b7ec-905fcab8a628] Running
	I0103 20:18:14.082861   62015 system_pods.go:61] "kube-controller-manager-no-preload-749210" [4e4207ef-8844-4547-88a4-b12026250554] Running
	I0103 20:18:14.082865   62015 system_pods.go:61] "kube-proxy-5hwf4" [98fafdf5-9a74-4c9f-96eb-20064c72c4e1] Running
	I0103 20:18:14.082870   62015 system_pods.go:61] "kube-scheduler-no-preload-749210" [21e70024-26b0-4740-ba52-99893ca20809] Running
	I0103 20:18:14.082876   62015 system_pods.go:61] "metrics-server-57f55c9bc5-tqn5m" [8cc1dc91-fafb-4405-8820-a7f99ccbbb0c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0103 20:18:14.082881   62015 system_pods.go:61] "storage-provisioner" [1bf4f1d7-c083-47e7-9976-76bbc72e7bff] Running
	I0103 20:18:14.082887   62015 system_pods.go:74] duration metric: took 3.942878112s to wait for pod list to return data ...
	I0103 20:18:14.082893   62015 default_sa.go:34] waiting for default service account to be created ...
	I0103 20:18:14.087079   62015 default_sa.go:45] found service account: "default"
	I0103 20:18:14.087106   62015 default_sa.go:55] duration metric: took 4.207195ms for default service account to be created ...
	I0103 20:18:14.087115   62015 system_pods.go:116] waiting for k8s-apps to be running ...
	I0103 20:18:14.094161   62015 system_pods.go:86] 8 kube-system pods found
	I0103 20:18:14.094185   62015 system_pods.go:89] "coredns-76f75df574-rbx58" [d5e91e6a-e3f9-4dbc-83ff-3069cb67847c] Running
	I0103 20:18:14.094190   62015 system_pods.go:89] "etcd-no-preload-749210" [3cfe84f3-28bd-490f-a7fc-152c1b9784ce] Running
	I0103 20:18:14.094195   62015 system_pods.go:89] "kube-apiserver-no-preload-749210" [1d9d03fa-23c6-4432-b7ec-905fcab8a628] Running
	I0103 20:18:14.094199   62015 system_pods.go:89] "kube-controller-manager-no-preload-749210" [4e4207ef-8844-4547-88a4-b12026250554] Running
	I0103 20:18:14.094204   62015 system_pods.go:89] "kube-proxy-5hwf4" [98fafdf5-9a74-4c9f-96eb-20064c72c4e1] Running
	I0103 20:18:14.094208   62015 system_pods.go:89] "kube-scheduler-no-preload-749210" [21e70024-26b0-4740-ba52-99893ca20809] Running
	I0103 20:18:14.094219   62015 system_pods.go:89] "metrics-server-57f55c9bc5-tqn5m" [8cc1dc91-fafb-4405-8820-a7f99ccbbb0c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0103 20:18:14.094231   62015 system_pods.go:89] "storage-provisioner" [1bf4f1d7-c083-47e7-9976-76bbc72e7bff] Running
	I0103 20:18:14.094244   62015 system_pods.go:126] duration metric: took 7.123869ms to wait for k8s-apps to be running ...
	I0103 20:18:14.094256   62015 system_svc.go:44] waiting for kubelet service to be running ....
	I0103 20:18:14.094305   62015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 20:18:14.110365   62015 system_svc.go:56] duration metric: took 16.099582ms WaitForService to wait for kubelet.
	I0103 20:18:14.110400   62015 kubeadm.go:581] duration metric: took 4m23.767155373s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0103 20:18:14.110439   62015 node_conditions.go:102] verifying NodePressure condition ...
	I0103 20:18:14.113809   62015 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0103 20:18:14.113833   62015 node_conditions.go:123] node cpu capacity is 2
	I0103 20:18:14.113842   62015 node_conditions.go:105] duration metric: took 3.394645ms to run NodePressure ...
	I0103 20:18:14.113853   62015 start.go:228] waiting for startup goroutines ...
	I0103 20:18:14.113859   62015 start.go:233] waiting for cluster config update ...
	I0103 20:18:14.113868   62015 start.go:242] writing updated cluster config ...
	I0103 20:18:14.114102   62015 ssh_runner.go:195] Run: rm -f paused
	I0103 20:18:14.163090   62015 start.go:600] kubectl: 1.29.0, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0103 20:18:14.165173   62015 out.go:177] * Done! kubectl is now configured to use "no-preload-749210" cluster and "default" namespace by default
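
The log-gathering loop recorded above (process 62015; the 62050 lines that follow are the parallel profile) can be reproduced by hand on the node. A minimal sketch using only commands already shown in the log, assuming shell access to the minikube VM and that <container-id> is a placeholder to be replaced with an ID reported by crictl:

  # list all containers (any state) matching a name filter, printing IDs only
  sudo crictl ps -a --quiet --name=kube-apiserver
  # tail the last 400 lines of a single container's log by ID
  sudo /usr/bin/crictl logs --tail 400 <container-id>
  # recent CRI-O and kubelet unit logs from the journal
  sudo journalctl -u crio -n 400
  sudo journalctl -u kubelet -n 400
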
	I0103 20:18:10.896026   62050 pod_ready.go:81] duration metric: took 4m0.007814497s waiting for pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace to be "Ready" ...
	E0103 20:18:10.896053   62050 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0103 20:18:10.896062   62050 pod_ready.go:38] duration metric: took 4m4.550955933s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:18:10.896076   62050 api_server.go:52] waiting for apiserver process to appear ...
	I0103 20:18:10.896109   62050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0103 20:18:10.896169   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0103 20:18:10.965458   62050 cri.go:89] found id: "ce56b3ad3d4b7d59eb61a980afcc5105c128779756da52fc886d00597b03c6cc"
	I0103 20:18:10.965485   62050 cri.go:89] found id: ""
	I0103 20:18:10.965494   62050 logs.go:284] 1 containers: [ce56b3ad3d4b7d59eb61a980afcc5105c128779756da52fc886d00597b03c6cc]
	I0103 20:18:10.965552   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:10.970818   62050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0103 20:18:10.970890   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0103 20:18:11.014481   62050 cri.go:89] found id: "3bacdb6bf662499f958fae41c1520a7f963ecec63e66bca7076854532316bd9d"
	I0103 20:18:11.014509   62050 cri.go:89] found id: ""
	I0103 20:18:11.014537   62050 logs.go:284] 1 containers: [3bacdb6bf662499f958fae41c1520a7f963ecec63e66bca7076854532316bd9d]
	I0103 20:18:11.014602   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:11.019157   62050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0103 20:18:11.019220   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0103 20:18:11.068101   62050 cri.go:89] found id: "e2370f79911fd2108cab00f1fb2d4c8f16fadefbef4d6ee135f875edd865be06"
	I0103 20:18:11.068129   62050 cri.go:89] found id: ""
	I0103 20:18:11.068138   62050 logs.go:284] 1 containers: [e2370f79911fd2108cab00f1fb2d4c8f16fadefbef4d6ee135f875edd865be06]
	I0103 20:18:11.068202   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:11.075018   62050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0103 20:18:11.075098   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0103 20:18:11.122838   62050 cri.go:89] found id: "abbaa7d1ca8582bf77f6d0951a5eca6711a89471f2239b41df4bd359e01e5a7c"
	I0103 20:18:11.122862   62050 cri.go:89] found id: ""
	I0103 20:18:11.122871   62050 logs.go:284] 1 containers: [abbaa7d1ca8582bf77f6d0951a5eca6711a89471f2239b41df4bd359e01e5a7c]
	I0103 20:18:11.122925   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:11.128488   62050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0103 20:18:11.128563   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0103 20:18:11.178133   62050 cri.go:89] found id: "b1525243614b036cac3291b5a2fbf29c28fdb68757d81418e7e09cfec2c36032"
	I0103 20:18:11.178160   62050 cri.go:89] found id: ""
	I0103 20:18:11.178170   62050 logs.go:284] 1 containers: [b1525243614b036cac3291b5a2fbf29c28fdb68757d81418e7e09cfec2c36032]
	I0103 20:18:11.178233   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:11.182823   62050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0103 20:18:11.182900   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0103 20:18:11.229175   62050 cri.go:89] found id: "2b7de3342fdb5423de182fd009630fc36cee11de3a5f5ff06603fedd80e2d94b"
	I0103 20:18:11.229207   62050 cri.go:89] found id: ""
	I0103 20:18:11.229218   62050 logs.go:284] 1 containers: [2b7de3342fdb5423de182fd009630fc36cee11de3a5f5ff06603fedd80e2d94b]
	I0103 20:18:11.229271   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:11.238617   62050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0103 20:18:11.238686   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0103 20:18:11.289070   62050 cri.go:89] found id: ""
	I0103 20:18:11.289107   62050 logs.go:284] 0 containers: []
	W0103 20:18:11.289115   62050 logs.go:286] No container was found matching "kindnet"
	I0103 20:18:11.289121   62050 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0103 20:18:11.289204   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0103 20:18:11.333342   62050 cri.go:89] found id: "3d1fa8b05cd7cc8dd982013b6295fb6f79f2c8db93aa0c408fc57ea7e898125a"
	I0103 20:18:11.333365   62050 cri.go:89] found id: "365147e198ba529e04c62145182c0e5131c1812d4b8842299d0236cbf556e40f"
	I0103 20:18:11.333370   62050 cri.go:89] found id: ""
	I0103 20:18:11.333376   62050 logs.go:284] 2 containers: [3d1fa8b05cd7cc8dd982013b6295fb6f79f2c8db93aa0c408fc57ea7e898125a 365147e198ba529e04c62145182c0e5131c1812d4b8842299d0236cbf556e40f]
	I0103 20:18:11.333430   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:11.338236   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:11.342643   62050 logs.go:123] Gathering logs for container status ...
	I0103 20:18:11.342668   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 20:18:11.395443   62050 logs.go:123] Gathering logs for describe nodes ...
	I0103 20:18:11.395471   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0103 20:18:11.561224   62050 logs.go:123] Gathering logs for etcd [3bacdb6bf662499f958fae41c1520a7f963ecec63e66bca7076854532316bd9d] ...
	I0103 20:18:11.561258   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bacdb6bf662499f958fae41c1520a7f963ecec63e66bca7076854532316bd9d"
	I0103 20:18:11.619642   62050 logs.go:123] Gathering logs for kube-scheduler [abbaa7d1ca8582bf77f6d0951a5eca6711a89471f2239b41df4bd359e01e5a7c] ...
	I0103 20:18:11.619677   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abbaa7d1ca8582bf77f6d0951a5eca6711a89471f2239b41df4bd359e01e5a7c"
	I0103 20:18:11.656329   62050 logs.go:123] Gathering logs for kube-controller-manager [2b7de3342fdb5423de182fd009630fc36cee11de3a5f5ff06603fedd80e2d94b] ...
	I0103 20:18:11.656370   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b7de3342fdb5423de182fd009630fc36cee11de3a5f5ff06603fedd80e2d94b"
	I0103 20:18:11.710651   62050 logs.go:123] Gathering logs for storage-provisioner [3d1fa8b05cd7cc8dd982013b6295fb6f79f2c8db93aa0c408fc57ea7e898125a] ...
	I0103 20:18:11.710685   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d1fa8b05cd7cc8dd982013b6295fb6f79f2c8db93aa0c408fc57ea7e898125a"
	I0103 20:18:11.756839   62050 logs.go:123] Gathering logs for storage-provisioner [365147e198ba529e04c62145182c0e5131c1812d4b8842299d0236cbf556e40f] ...
	I0103 20:18:11.756866   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 365147e198ba529e04c62145182c0e5131c1812d4b8842299d0236cbf556e40f"
	I0103 20:18:11.791885   62050 logs.go:123] Gathering logs for dmesg ...
	I0103 20:18:11.791920   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 20:18:11.805161   62050 logs.go:123] Gathering logs for CRI-O ...
	I0103 20:18:11.805185   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0103 20:18:12.261916   62050 logs.go:123] Gathering logs for kubelet ...
	I0103 20:18:12.261973   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 20:18:12.316486   62050 logs.go:123] Gathering logs for kube-apiserver [ce56b3ad3d4b7d59eb61a980afcc5105c128779756da52fc886d00597b03c6cc] ...
	I0103 20:18:12.316525   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce56b3ad3d4b7d59eb61a980afcc5105c128779756da52fc886d00597b03c6cc"
	I0103 20:18:12.367998   62050 logs.go:123] Gathering logs for coredns [e2370f79911fd2108cab00f1fb2d4c8f16fadefbef4d6ee135f875edd865be06] ...
	I0103 20:18:12.368032   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2370f79911fd2108cab00f1fb2d4c8f16fadefbef4d6ee135f875edd865be06"
	I0103 20:18:12.404277   62050 logs.go:123] Gathering logs for kube-proxy [b1525243614b036cac3291b5a2fbf29c28fdb68757d81418e7e09cfec2c36032] ...
	I0103 20:18:12.404316   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1525243614b036cac3291b5a2fbf29c28fdb68757d81418e7e09cfec2c36032"
	I0103 20:18:14.943727   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:18:14.959322   62050 api_server.go:72] duration metric: took 4m14.593955756s to wait for apiserver process to appear ...
	I0103 20:18:14.959344   62050 api_server.go:88] waiting for apiserver healthz status ...
	I0103 20:18:14.959384   62050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0103 20:18:14.959443   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0103 20:18:15.001580   62050 cri.go:89] found id: "ce56b3ad3d4b7d59eb61a980afcc5105c128779756da52fc886d00597b03c6cc"
	I0103 20:18:15.001613   62050 cri.go:89] found id: ""
	I0103 20:18:15.001624   62050 logs.go:284] 1 containers: [ce56b3ad3d4b7d59eb61a980afcc5105c128779756da52fc886d00597b03c6cc]
	I0103 20:18:15.001688   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:15.005964   62050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0103 20:18:15.006044   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0103 20:18:15.043364   62050 cri.go:89] found id: "3bacdb6bf662499f958fae41c1520a7f963ecec63e66bca7076854532316bd9d"
	I0103 20:18:15.043393   62050 cri.go:89] found id: ""
	I0103 20:18:15.043403   62050 logs.go:284] 1 containers: [3bacdb6bf662499f958fae41c1520a7f963ecec63e66bca7076854532316bd9d]
	I0103 20:18:15.043461   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:15.047226   62050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0103 20:18:15.047291   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0103 20:18:15.091700   62050 cri.go:89] found id: "e2370f79911fd2108cab00f1fb2d4c8f16fadefbef4d6ee135f875edd865be06"
	I0103 20:18:15.091727   62050 cri.go:89] found id: ""
	I0103 20:18:15.091736   62050 logs.go:284] 1 containers: [e2370f79911fd2108cab00f1fb2d4c8f16fadefbef4d6ee135f875edd865be06]
	I0103 20:18:15.091794   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:15.095953   62050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0103 20:18:15.096038   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0103 20:18:15.132757   62050 cri.go:89] found id: "abbaa7d1ca8582bf77f6d0951a5eca6711a89471f2239b41df4bd359e01e5a7c"
	I0103 20:18:15.132785   62050 cri.go:89] found id: ""
	I0103 20:18:15.132796   62050 logs.go:284] 1 containers: [abbaa7d1ca8582bf77f6d0951a5eca6711a89471f2239b41df4bd359e01e5a7c]
	I0103 20:18:15.132856   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:15.137574   62050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0103 20:18:15.137637   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0103 20:18:15.174799   62050 cri.go:89] found id: "b1525243614b036cac3291b5a2fbf29c28fdb68757d81418e7e09cfec2c36032"
	I0103 20:18:15.174827   62050 cri.go:89] found id: ""
	I0103 20:18:15.174836   62050 logs.go:284] 1 containers: [b1525243614b036cac3291b5a2fbf29c28fdb68757d81418e7e09cfec2c36032]
	I0103 20:18:15.174893   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:15.179052   62050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0103 20:18:15.179119   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0103 20:18:15.218730   62050 cri.go:89] found id: "2b7de3342fdb5423de182fd009630fc36cee11de3a5f5ff06603fedd80e2d94b"
	I0103 20:18:15.218761   62050 cri.go:89] found id: ""
	I0103 20:18:15.218770   62050 logs.go:284] 1 containers: [2b7de3342fdb5423de182fd009630fc36cee11de3a5f5ff06603fedd80e2d94b]
	I0103 20:18:15.218829   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:15.222730   62050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0103 20:18:15.222796   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0103 20:18:15.265020   62050 cri.go:89] found id: ""
	I0103 20:18:15.265046   62050 logs.go:284] 0 containers: []
	W0103 20:18:15.265053   62050 logs.go:286] No container was found matching "kindnet"
	I0103 20:18:15.265059   62050 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0103 20:18:15.265122   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0103 20:18:15.307032   62050 cri.go:89] found id: "3d1fa8b05cd7cc8dd982013b6295fb6f79f2c8db93aa0c408fc57ea7e898125a"
	I0103 20:18:15.307059   62050 cri.go:89] found id: "365147e198ba529e04c62145182c0e5131c1812d4b8842299d0236cbf556e40f"
	I0103 20:18:15.307065   62050 cri.go:89] found id: ""
	I0103 20:18:15.307074   62050 logs.go:284] 2 containers: [3d1fa8b05cd7cc8dd982013b6295fb6f79f2c8db93aa0c408fc57ea7e898125a 365147e198ba529e04c62145182c0e5131c1812d4b8842299d0236cbf556e40f]
	I0103 20:18:15.307132   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:15.311275   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:15.315089   62050 logs.go:123] Gathering logs for container status ...
	I0103 20:18:15.315113   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 20:18:15.361815   62050 logs.go:123] Gathering logs for describe nodes ...
	I0103 20:18:15.361840   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0103 20:18:15.493913   62050 logs.go:123] Gathering logs for kube-apiserver [ce56b3ad3d4b7d59eb61a980afcc5105c128779756da52fc886d00597b03c6cc] ...
	I0103 20:18:15.493947   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce56b3ad3d4b7d59eb61a980afcc5105c128779756da52fc886d00597b03c6cc"
	I0103 20:18:15.553841   62050 logs.go:123] Gathering logs for coredns [e2370f79911fd2108cab00f1fb2d4c8f16fadefbef4d6ee135f875edd865be06] ...
	I0103 20:18:15.553881   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2370f79911fd2108cab00f1fb2d4c8f16fadefbef4d6ee135f875edd865be06"
	I0103 20:18:15.590885   62050 logs.go:123] Gathering logs for storage-provisioner [365147e198ba529e04c62145182c0e5131c1812d4b8842299d0236cbf556e40f] ...
	I0103 20:18:15.590911   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 365147e198ba529e04c62145182c0e5131c1812d4b8842299d0236cbf556e40f"
	I0103 20:18:15.630332   62050 logs.go:123] Gathering logs for CRI-O ...
	I0103 20:18:15.630357   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0103 20:18:16.074625   62050 logs.go:123] Gathering logs for kubelet ...
	I0103 20:18:16.074659   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 20:18:16.133116   62050 logs.go:123] Gathering logs for dmesg ...
	I0103 20:18:16.133161   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 20:18:16.147559   62050 logs.go:123] Gathering logs for kube-controller-manager [2b7de3342fdb5423de182fd009630fc36cee11de3a5f5ff06603fedd80e2d94b] ...
	I0103 20:18:16.147585   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b7de3342fdb5423de182fd009630fc36cee11de3a5f5ff06603fedd80e2d94b"
	I0103 20:18:16.199131   62050 logs.go:123] Gathering logs for storage-provisioner [3d1fa8b05cd7cc8dd982013b6295fb6f79f2c8db93aa0c408fc57ea7e898125a] ...
	I0103 20:18:16.199167   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d1fa8b05cd7cc8dd982013b6295fb6f79f2c8db93aa0c408fc57ea7e898125a"
	I0103 20:18:16.238085   62050 logs.go:123] Gathering logs for etcd [3bacdb6bf662499f958fae41c1520a7f963ecec63e66bca7076854532316bd9d] ...
	I0103 20:18:16.238116   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bacdb6bf662499f958fae41c1520a7f963ecec63e66bca7076854532316bd9d"
	I0103 20:18:16.294992   62050 logs.go:123] Gathering logs for kube-proxy [b1525243614b036cac3291b5a2fbf29c28fdb68757d81418e7e09cfec2c36032] ...
	I0103 20:18:16.295032   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1525243614b036cac3291b5a2fbf29c28fdb68757d81418e7e09cfec2c36032"
	I0103 20:18:16.333862   62050 logs.go:123] Gathering logs for kube-scheduler [abbaa7d1ca8582bf77f6d0951a5eca6711a89471f2239b41df4bd359e01e5a7c] ...
	I0103 20:18:16.333896   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abbaa7d1ca8582bf77f6d0951a5eca6711a89471f2239b41df4bd359e01e5a7c"
	I0103 20:18:18.875707   62050 api_server.go:253] Checking apiserver healthz at https://192.168.39.139:8444/healthz ...
	I0103 20:18:18.882546   62050 api_server.go:279] https://192.168.39.139:8444/healthz returned 200:
	ok
	I0103 20:18:18.884633   62050 api_server.go:141] control plane version: v1.28.4
	I0103 20:18:18.884662   62050 api_server.go:131] duration metric: took 3.925311693s to wait for apiserver health ...
	I0103 20:18:18.884672   62050 system_pods.go:43] waiting for kube-system pods to appear ...
	I0103 20:18:18.884701   62050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0103 20:18:18.884765   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0103 20:18:18.922149   62050 cri.go:89] found id: "ce56b3ad3d4b7d59eb61a980afcc5105c128779756da52fc886d00597b03c6cc"
	I0103 20:18:18.922170   62050 cri.go:89] found id: ""
	I0103 20:18:18.922177   62050 logs.go:284] 1 containers: [ce56b3ad3d4b7d59eb61a980afcc5105c128779756da52fc886d00597b03c6cc]
	I0103 20:18:18.922223   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:18.926886   62050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0103 20:18:18.926952   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0103 20:18:18.970009   62050 cri.go:89] found id: "3bacdb6bf662499f958fae41c1520a7f963ecec63e66bca7076854532316bd9d"
	I0103 20:18:18.970035   62050 cri.go:89] found id: ""
	I0103 20:18:18.970043   62050 logs.go:284] 1 containers: [3bacdb6bf662499f958fae41c1520a7f963ecec63e66bca7076854532316bd9d]
	I0103 20:18:18.970107   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:18.974349   62050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0103 20:18:18.974413   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0103 20:18:19.016970   62050 cri.go:89] found id: "e2370f79911fd2108cab00f1fb2d4c8f16fadefbef4d6ee135f875edd865be06"
	I0103 20:18:19.016994   62050 cri.go:89] found id: ""
	I0103 20:18:19.017004   62050 logs.go:284] 1 containers: [e2370f79911fd2108cab00f1fb2d4c8f16fadefbef4d6ee135f875edd865be06]
	I0103 20:18:19.017069   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:19.021899   62050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0103 20:18:19.021959   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0103 20:18:19.076044   62050 cri.go:89] found id: "abbaa7d1ca8582bf77f6d0951a5eca6711a89471f2239b41df4bd359e01e5a7c"
	I0103 20:18:19.076074   62050 cri.go:89] found id: ""
	I0103 20:18:19.076081   62050 logs.go:284] 1 containers: [abbaa7d1ca8582bf77f6d0951a5eca6711a89471f2239b41df4bd359e01e5a7c]
	I0103 20:18:19.076134   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:19.081699   62050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0103 20:18:19.081775   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0103 20:18:19.120022   62050 cri.go:89] found id: "b1525243614b036cac3291b5a2fbf29c28fdb68757d81418e7e09cfec2c36032"
	I0103 20:18:19.120046   62050 cri.go:89] found id: ""
	I0103 20:18:19.120053   62050 logs.go:284] 1 containers: [b1525243614b036cac3291b5a2fbf29c28fdb68757d81418e7e09cfec2c36032]
	I0103 20:18:19.120107   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:19.124627   62050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0103 20:18:19.124698   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0103 20:18:19.165431   62050 cri.go:89] found id: "2b7de3342fdb5423de182fd009630fc36cee11de3a5f5ff06603fedd80e2d94b"
	I0103 20:18:19.165453   62050 cri.go:89] found id: ""
	I0103 20:18:19.165463   62050 logs.go:284] 1 containers: [2b7de3342fdb5423de182fd009630fc36cee11de3a5f5ff06603fedd80e2d94b]
	I0103 20:18:19.165513   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:19.170214   62050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0103 20:18:19.170282   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0103 20:18:19.208676   62050 cri.go:89] found id: ""
	I0103 20:18:19.208706   62050 logs.go:284] 0 containers: []
	W0103 20:18:19.208716   62050 logs.go:286] No container was found matching "kindnet"
	I0103 20:18:19.208724   62050 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0103 20:18:19.208782   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0103 20:18:19.246065   62050 cri.go:89] found id: "3d1fa8b05cd7cc8dd982013b6295fb6f79f2c8db93aa0c408fc57ea7e898125a"
	I0103 20:18:19.246092   62050 cri.go:89] found id: "365147e198ba529e04c62145182c0e5131c1812d4b8842299d0236cbf556e40f"
	I0103 20:18:19.246101   62050 cri.go:89] found id: ""
	I0103 20:18:19.246109   62050 logs.go:284] 2 containers: [3d1fa8b05cd7cc8dd982013b6295fb6f79f2c8db93aa0c408fc57ea7e898125a 365147e198ba529e04c62145182c0e5131c1812d4b8842299d0236cbf556e40f]
	I0103 20:18:19.246169   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:19.250217   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:19.259598   62050 logs.go:123] Gathering logs for CRI-O ...
	I0103 20:18:19.259628   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0103 20:18:19.643718   62050 logs.go:123] Gathering logs for kube-apiserver [ce56b3ad3d4b7d59eb61a980afcc5105c128779756da52fc886d00597b03c6cc] ...
	I0103 20:18:19.643755   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce56b3ad3d4b7d59eb61a980afcc5105c128779756da52fc886d00597b03c6cc"
	I0103 20:18:19.697873   62050 logs.go:123] Gathering logs for etcd [3bacdb6bf662499f958fae41c1520a7f963ecec63e66bca7076854532316bd9d] ...
	I0103 20:18:19.697905   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bacdb6bf662499f958fae41c1520a7f963ecec63e66bca7076854532316bd9d"
	I0103 20:18:19.762995   62050 logs.go:123] Gathering logs for kubelet ...
	I0103 20:18:19.763030   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 20:18:19.830835   62050 logs.go:123] Gathering logs for describe nodes ...
	I0103 20:18:19.830871   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0103 20:18:19.969465   62050 logs.go:123] Gathering logs for kube-proxy [b1525243614b036cac3291b5a2fbf29c28fdb68757d81418e7e09cfec2c36032] ...
	I0103 20:18:19.969501   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1525243614b036cac3291b5a2fbf29c28fdb68757d81418e7e09cfec2c36032"
	I0103 20:18:20.011269   62050 logs.go:123] Gathering logs for kube-controller-manager [2b7de3342fdb5423de182fd009630fc36cee11de3a5f5ff06603fedd80e2d94b] ...
	I0103 20:18:20.011301   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b7de3342fdb5423de182fd009630fc36cee11de3a5f5ff06603fedd80e2d94b"
	I0103 20:18:20.059317   62050 logs.go:123] Gathering logs for storage-provisioner [3d1fa8b05cd7cc8dd982013b6295fb6f79f2c8db93aa0c408fc57ea7e898125a] ...
	I0103 20:18:20.059352   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d1fa8b05cd7cc8dd982013b6295fb6f79f2c8db93aa0c408fc57ea7e898125a"
	I0103 20:18:20.099428   62050 logs.go:123] Gathering logs for storage-provisioner [365147e198ba529e04c62145182c0e5131c1812d4b8842299d0236cbf556e40f] ...
	I0103 20:18:20.099455   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 365147e198ba529e04c62145182c0e5131c1812d4b8842299d0236cbf556e40f"
	I0103 20:18:20.135773   62050 logs.go:123] Gathering logs for dmesg ...
	I0103 20:18:20.135809   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 20:18:20.149611   62050 logs.go:123] Gathering logs for kube-scheduler [abbaa7d1ca8582bf77f6d0951a5eca6711a89471f2239b41df4bd359e01e5a7c] ...
	I0103 20:18:20.149649   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abbaa7d1ca8582bf77f6d0951a5eca6711a89471f2239b41df4bd359e01e5a7c"
	I0103 20:18:20.190742   62050 logs.go:123] Gathering logs for container status ...
	I0103 20:18:20.190788   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 20:18:20.241115   62050 logs.go:123] Gathering logs for coredns [e2370f79911fd2108cab00f1fb2d4c8f16fadefbef4d6ee135f875edd865be06] ...
	I0103 20:18:20.241142   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2370f79911fd2108cab00f1fb2d4c8f16fadefbef4d6ee135f875edd865be06"
	I0103 20:18:22.789475   62050 system_pods.go:59] 8 kube-system pods found
	I0103 20:18:22.789502   62050 system_pods.go:61] "coredns-5dd5756b68-zxzqg" [d066762e-7e1f-4b3a-9b21-6a7a3ca53edd] Running
	I0103 20:18:22.789507   62050 system_pods.go:61] "etcd-default-k8s-diff-port-018788" [c0023ec6-ae61-4532-840e-287e9945f4ec] Running
	I0103 20:18:22.789512   62050 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-018788" [bba03f36-cef8-4e19-adc5-1a65756bdf1c] Running
	I0103 20:18:22.789516   62050 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-018788" [baf7a3c2-3573-4977-be30-d63e4df2de22] Running
	I0103 20:18:22.789520   62050 system_pods.go:61] "kube-proxy-wqjlv" [de5a1b04-4bce-4111-bfe8-2adb2f947d78] Running
	I0103 20:18:22.789527   62050 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-018788" [cdc74e5c-0085-49ae-9471-fce52a1a6b2f] Running
	I0103 20:18:22.789533   62050 system_pods.go:61] "metrics-server-57f55c9bc5-pgbbj" [ee3963d9-1627-4e78-91e5-1f92c2011f4b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0103 20:18:22.789538   62050 system_pods.go:61] "storage-provisioner" [ef3511cb-5587-4ea5-86b6-d52cc5afb226] Running
	I0103 20:18:22.789544   62050 system_pods.go:74] duration metric: took 3.904866616s to wait for pod list to return data ...
	I0103 20:18:22.789551   62050 default_sa.go:34] waiting for default service account to be created ...
	I0103 20:18:22.791976   62050 default_sa.go:45] found service account: "default"
	I0103 20:18:22.792000   62050 default_sa.go:55] duration metric: took 2.444229ms for default service account to be created ...
	I0103 20:18:22.792007   62050 system_pods.go:116] waiting for k8s-apps to be running ...
	I0103 20:18:22.797165   62050 system_pods.go:86] 8 kube-system pods found
	I0103 20:18:22.797186   62050 system_pods.go:89] "coredns-5dd5756b68-zxzqg" [d066762e-7e1f-4b3a-9b21-6a7a3ca53edd] Running
	I0103 20:18:22.797192   62050 system_pods.go:89] "etcd-default-k8s-diff-port-018788" [c0023ec6-ae61-4532-840e-287e9945f4ec] Running
	I0103 20:18:22.797196   62050 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-018788" [bba03f36-cef8-4e19-adc5-1a65756bdf1c] Running
	I0103 20:18:22.797200   62050 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-018788" [baf7a3c2-3573-4977-be30-d63e4df2de22] Running
	I0103 20:18:22.797204   62050 system_pods.go:89] "kube-proxy-wqjlv" [de5a1b04-4bce-4111-bfe8-2adb2f947d78] Running
	I0103 20:18:22.797209   62050 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-018788" [cdc74e5c-0085-49ae-9471-fce52a1a6b2f] Running
	I0103 20:18:22.797221   62050 system_pods.go:89] "metrics-server-57f55c9bc5-pgbbj" [ee3963d9-1627-4e78-91e5-1f92c2011f4b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0103 20:18:22.797234   62050 system_pods.go:89] "storage-provisioner" [ef3511cb-5587-4ea5-86b6-d52cc5afb226] Running
	I0103 20:18:22.797244   62050 system_pods.go:126] duration metric: took 5.231578ms to wait for k8s-apps to be running ...
	I0103 20:18:22.797256   62050 system_svc.go:44] waiting for kubelet service to be running ....
	I0103 20:18:22.797303   62050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 20:18:22.811467   62050 system_svc.go:56] duration metric: took 14.201511ms WaitForService to wait for kubelet.
	I0103 20:18:22.811503   62050 kubeadm.go:581] duration metric: took 4m22.446143128s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0103 20:18:22.811533   62050 node_conditions.go:102] verifying NodePressure condition ...
	I0103 20:18:22.814594   62050 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0103 20:18:22.814617   62050 node_conditions.go:123] node cpu capacity is 2
	I0103 20:18:22.814629   62050 node_conditions.go:105] duration metric: took 3.089727ms to run NodePressure ...
	I0103 20:18:22.814639   62050 start.go:228] waiting for startup goroutines ...
	I0103 20:18:22.814645   62050 start.go:233] waiting for cluster config update ...
	I0103 20:18:22.814654   62050 start.go:242] writing updated cluster config ...
	I0103 20:18:22.814897   62050 ssh_runner.go:195] Run: rm -f paused
	I0103 20:18:22.864761   62050 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0103 20:18:22.866755   62050 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-018788" cluster and "default" namespace by default
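
The 62050 wait loop above ends after an apiserver health probe (logged at 20:18:18, returning 200/ok). A minimal sketch of the same check from the host, assuming the endpoint https://192.168.39.139:8444/healthz recorded in the log is still reachable:

  # probe the kube-apiserver healthz endpoint recorded in the log above;
  # -k skips TLS verification, assuming the cluster CA is not in the host trust store
  curl -k https://192.168.39.139:8444/healthz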
	
	
	==> CRI-O <==
	-- Journal begins at Wed 2024-01-03 20:13:21 UTC, ends at Wed 2024-01-03 20:27:24 UTC. --
	Jan 03 20:27:24 default-k8s-diff-port-018788 crio[725]: time="2024-01-03 20:27:24.471960382Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704313644471945680,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=7f2fcc2b-0b0d-495a-982f-d7a59773b7a7 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 20:27:24 default-k8s-diff-port-018788 crio[725]: time="2024-01-03 20:27:24.472575808Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=d4e8d178-92db-4c31-949f-3986100a49a0 name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:27:24 default-k8s-diff-port-018788 crio[725]: time="2024-01-03 20:27:24.472653370Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=d4e8d178-92db-4c31-949f-3986100a49a0 name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:27:24 default-k8s-diff-port-018788 crio[725]: time="2024-01-03 20:27:24.472912106Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3d1fa8b05cd7cc8dd982013b6295fb6f79f2c8db93aa0c408fc57ea7e898125a,PodSandboxId:be6527d03445d6fa58d54394ffd39658d656ac72a22c336705a251baa7a9fcbc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704312873014332423,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef3511cb-5587-4ea5-86b6-d52cc5afb226,},Annotations:map[string]string{io.kubernetes.container.hash: 68c028bd,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c392fb14a91e9f4a6643252d5dfac2e1c164e9206980da27ef53a85db6c130d1,PodSandboxId:baccf7a16fdfeb12fcac098e455733f670ea9f2b569244440ea0b56862308b6e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1704312848149211593,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cfdaacfb-b339-488d-968b-537870733563,},Annotations:map[string]string{io.kubernetes.container.hash: 31b9b4da,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2370f79911fd2108cab00f1fb2d4c8f16fadefbef4d6ee135f875edd865be06,PodSandboxId:56ca6ee8a63f137f2292a05567f59fb92b958a01dcda968d2dbdbafaf2508be9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704312845035625348,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-zxzqg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d066762e-7e1f-4b3a-9b21-6a7a3ca53edd,},Annotations:map[string]string{io.kubernetes.container.hash: a758356f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:365147e198ba529e04c62145182c0e5131c1812d4b8842299d0236cbf556e40f,PodSandboxId:be6527d03445d6fa58d54394ffd39658d656ac72a22c336705a251baa7a9fcbc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1704312840086461478,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: ef3511cb-5587-4ea5-86b6-d52cc5afb226,},Annotations:map[string]string{io.kubernetes.container.hash: 68c028bd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1525243614b036cac3291b5a2fbf29c28fdb68757d81418e7e09cfec2c36032,PodSandboxId:042f1c9914efd103d02790491b12b041d9d6cbf9db26cda3fda0bf0ece589ea5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704312840119281646,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wqjlv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d
e5a1b04-4bce-4111-bfe8-2adb2f947d78,},Annotations:map[string]string{io.kubernetes.container.hash: f4f4cb38,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abbaa7d1ca8582bf77f6d0951a5eca6711a89471f2239b41df4bd359e01e5a7c,PodSandboxId:9f7b2686f78ddceb890ed734bc51b694db7a26c7a3bf42bfc886fee3a075b9ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704312830458531175,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-018788,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 303ebd0fe046fe6897895a41da889b48,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bacdb6bf662499f958fae41c1520a7f963ecec63e66bca7076854532316bd9d,PodSandboxId:12f7cedbe223b2e50b1a66b12ed22ca457c8fd6662f93528652b9057ada4433f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704312830383378086,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-018788,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4aa49e06c8498ad02035a6a3c854470,},An
notations:map[string]string{io.kubernetes.container.hash: d09eccde,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b7de3342fdb5423de182fd009630fc36cee11de3a5f5ff06603fedd80e2d94b,PodSandboxId:f0c80a0255d704e395ebdab78a059b1716a87371444af6e50a4ec1b42ec3ae0a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704312829916887775,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-018788,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5
f53e8f2639e05aaf76598b82d388a7f,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce56b3ad3d4b7d59eb61a980afcc5105c128779756da52fc886d00597b03c6cc,PodSandboxId:16b3c8945f86cea9f3be3272d2381a6e4e036988c3e66976cad2be3ccff0ff8d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704312829748694586,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-018788,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5
1c440e3088352f1d026b9319d0fd133,},Annotations:map[string]string{io.kubernetes.container.hash: a6c6c5d0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=d4e8d178-92db-4c31-949f-3986100a49a0 name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:27:24 default-k8s-diff-port-018788 crio[725]: time="2024-01-03 20:27:24.515643795Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=13385f02-01e6-4d48-83fa-10aff5cd61b1 name=/runtime.v1.RuntimeService/Version
	Jan 03 20:27:24 default-k8s-diff-port-018788 crio[725]: time="2024-01-03 20:27:24.515725661Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=13385f02-01e6-4d48-83fa-10aff5cd61b1 name=/runtime.v1.RuntimeService/Version
	Jan 03 20:27:24 default-k8s-diff-port-018788 crio[725]: time="2024-01-03 20:27:24.517169124Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=524bc4f1-c2a4-44ea-99e7-cc7a0ee751de name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 20:27:24 default-k8s-diff-port-018788 crio[725]: time="2024-01-03 20:27:24.517687131Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704313644517667189,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=524bc4f1-c2a4-44ea-99e7-cc7a0ee751de name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 20:27:24 default-k8s-diff-port-018788 crio[725]: time="2024-01-03 20:27:24.518446605Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4909b67c-96c1-4333-a441-001f11034457 name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:27:24 default-k8s-diff-port-018788 crio[725]: time="2024-01-03 20:27:24.518648302Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4909b67c-96c1-4333-a441-001f11034457 name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:27:24 default-k8s-diff-port-018788 crio[725]: time="2024-01-03 20:27:24.518981272Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3d1fa8b05cd7cc8dd982013b6295fb6f79f2c8db93aa0c408fc57ea7e898125a,PodSandboxId:be6527d03445d6fa58d54394ffd39658d656ac72a22c336705a251baa7a9fcbc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704312873014332423,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef3511cb-5587-4ea5-86b6-d52cc5afb226,},Annotations:map[string]string{io.kubernetes.container.hash: 68c028bd,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c392fb14a91e9f4a6643252d5dfac2e1c164e9206980da27ef53a85db6c130d1,PodSandboxId:baccf7a16fdfeb12fcac098e455733f670ea9f2b569244440ea0b56862308b6e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1704312848149211593,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cfdaacfb-b339-488d-968b-537870733563,},Annotations:map[string]string{io.kubernetes.container.hash: 31b9b4da,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2370f79911fd2108cab00f1fb2d4c8f16fadefbef4d6ee135f875edd865be06,PodSandboxId:56ca6ee8a63f137f2292a05567f59fb92b958a01dcda968d2dbdbafaf2508be9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704312845035625348,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-zxzqg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d066762e-7e1f-4b3a-9b21-6a7a3ca53edd,},Annotations:map[string]string{io.kubernetes.container.hash: a758356f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:365147e198ba529e04c62145182c0e5131c1812d4b8842299d0236cbf556e40f,PodSandboxId:be6527d03445d6fa58d54394ffd39658d656ac72a22c336705a251baa7a9fcbc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1704312840086461478,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: ef3511cb-5587-4ea5-86b6-d52cc5afb226,},Annotations:map[string]string{io.kubernetes.container.hash: 68c028bd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1525243614b036cac3291b5a2fbf29c28fdb68757d81418e7e09cfec2c36032,PodSandboxId:042f1c9914efd103d02790491b12b041d9d6cbf9db26cda3fda0bf0ece589ea5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704312840119281646,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wqjlv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d
e5a1b04-4bce-4111-bfe8-2adb2f947d78,},Annotations:map[string]string{io.kubernetes.container.hash: f4f4cb38,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abbaa7d1ca8582bf77f6d0951a5eca6711a89471f2239b41df4bd359e01e5a7c,PodSandboxId:9f7b2686f78ddceb890ed734bc51b694db7a26c7a3bf42bfc886fee3a075b9ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704312830458531175,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-018788,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 303ebd0fe046fe6897895a41da889b48,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bacdb6bf662499f958fae41c1520a7f963ecec63e66bca7076854532316bd9d,PodSandboxId:12f7cedbe223b2e50b1a66b12ed22ca457c8fd6662f93528652b9057ada4433f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704312830383378086,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-018788,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4aa49e06c8498ad02035a6a3c854470,},An
notations:map[string]string{io.kubernetes.container.hash: d09eccde,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b7de3342fdb5423de182fd009630fc36cee11de3a5f5ff06603fedd80e2d94b,PodSandboxId:f0c80a0255d704e395ebdab78a059b1716a87371444af6e50a4ec1b42ec3ae0a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704312829916887775,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-018788,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5
f53e8f2639e05aaf76598b82d388a7f,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce56b3ad3d4b7d59eb61a980afcc5105c128779756da52fc886d00597b03c6cc,PodSandboxId:16b3c8945f86cea9f3be3272d2381a6e4e036988c3e66976cad2be3ccff0ff8d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704312829748694586,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-018788,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5
1c440e3088352f1d026b9319d0fd133,},Annotations:map[string]string{io.kubernetes.container.hash: a6c6c5d0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4909b67c-96c1-4333-a441-001f11034457 name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:27:24 default-k8s-diff-port-018788 crio[725]: time="2024-01-03 20:27:24.561170493Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=78e92552-cd84-4af0-88b4-49e6da202b6b name=/runtime.v1.RuntimeService/Version
	Jan 03 20:27:24 default-k8s-diff-port-018788 crio[725]: time="2024-01-03 20:27:24.561237864Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=78e92552-cd84-4af0-88b4-49e6da202b6b name=/runtime.v1.RuntimeService/Version
	Jan 03 20:27:24 default-k8s-diff-port-018788 crio[725]: time="2024-01-03 20:27:24.562769594Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=e907faca-4cb7-4eba-a31b-6d2bc8b440de name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 20:27:24 default-k8s-diff-port-018788 crio[725]: time="2024-01-03 20:27:24.563270491Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704313644563248832,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=e907faca-4cb7-4eba-a31b-6d2bc8b440de name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 20:27:24 default-k8s-diff-port-018788 crio[725]: time="2024-01-03 20:27:24.563894543Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=9250773d-10d2-4a80-b18e-4d7fa7aaac00 name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:27:24 default-k8s-diff-port-018788 crio[725]: time="2024-01-03 20:27:24.563963371Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=9250773d-10d2-4a80-b18e-4d7fa7aaac00 name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:27:24 default-k8s-diff-port-018788 crio[725]: time="2024-01-03 20:27:24.564192738Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3d1fa8b05cd7cc8dd982013b6295fb6f79f2c8db93aa0c408fc57ea7e898125a,PodSandboxId:be6527d03445d6fa58d54394ffd39658d656ac72a22c336705a251baa7a9fcbc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704312873014332423,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef3511cb-5587-4ea5-86b6-d52cc5afb226,},Annotations:map[string]string{io.kubernetes.container.hash: 68c028bd,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c392fb14a91e9f4a6643252d5dfac2e1c164e9206980da27ef53a85db6c130d1,PodSandboxId:baccf7a16fdfeb12fcac098e455733f670ea9f2b569244440ea0b56862308b6e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1704312848149211593,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cfdaacfb-b339-488d-968b-537870733563,},Annotations:map[string]string{io.kubernetes.container.hash: 31b9b4da,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2370f79911fd2108cab00f1fb2d4c8f16fadefbef4d6ee135f875edd865be06,PodSandboxId:56ca6ee8a63f137f2292a05567f59fb92b958a01dcda968d2dbdbafaf2508be9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704312845035625348,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-zxzqg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d066762e-7e1f-4b3a-9b21-6a7a3ca53edd,},Annotations:map[string]string{io.kubernetes.container.hash: a758356f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:365147e198ba529e04c62145182c0e5131c1812d4b8842299d0236cbf556e40f,PodSandboxId:be6527d03445d6fa58d54394ffd39658d656ac72a22c336705a251baa7a9fcbc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1704312840086461478,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: ef3511cb-5587-4ea5-86b6-d52cc5afb226,},Annotations:map[string]string{io.kubernetes.container.hash: 68c028bd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1525243614b036cac3291b5a2fbf29c28fdb68757d81418e7e09cfec2c36032,PodSandboxId:042f1c9914efd103d02790491b12b041d9d6cbf9db26cda3fda0bf0ece589ea5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704312840119281646,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wqjlv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d
e5a1b04-4bce-4111-bfe8-2adb2f947d78,},Annotations:map[string]string{io.kubernetes.container.hash: f4f4cb38,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abbaa7d1ca8582bf77f6d0951a5eca6711a89471f2239b41df4bd359e01e5a7c,PodSandboxId:9f7b2686f78ddceb890ed734bc51b694db7a26c7a3bf42bfc886fee3a075b9ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704312830458531175,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-018788,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 303ebd0fe046fe6897895a41da889b48,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bacdb6bf662499f958fae41c1520a7f963ecec63e66bca7076854532316bd9d,PodSandboxId:12f7cedbe223b2e50b1a66b12ed22ca457c8fd6662f93528652b9057ada4433f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704312830383378086,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-018788,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4aa49e06c8498ad02035a6a3c854470,},An
notations:map[string]string{io.kubernetes.container.hash: d09eccde,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b7de3342fdb5423de182fd009630fc36cee11de3a5f5ff06603fedd80e2d94b,PodSandboxId:f0c80a0255d704e395ebdab78a059b1716a87371444af6e50a4ec1b42ec3ae0a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704312829916887775,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-018788,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5
f53e8f2639e05aaf76598b82d388a7f,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce56b3ad3d4b7d59eb61a980afcc5105c128779756da52fc886d00597b03c6cc,PodSandboxId:16b3c8945f86cea9f3be3272d2381a6e4e036988c3e66976cad2be3ccff0ff8d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704312829748694586,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-018788,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5
1c440e3088352f1d026b9319d0fd133,},Annotations:map[string]string{io.kubernetes.container.hash: a6c6c5d0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=9250773d-10d2-4a80-b18e-4d7fa7aaac00 name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:27:24 default-k8s-diff-port-018788 crio[725]: time="2024-01-03 20:27:24.598411707Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=48b9576a-7945-4dc0-b352-f0e1cf2002f5 name=/runtime.v1.RuntimeService/Version
	Jan 03 20:27:24 default-k8s-diff-port-018788 crio[725]: time="2024-01-03 20:27:24.598492555Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=48b9576a-7945-4dc0-b352-f0e1cf2002f5 name=/runtime.v1.RuntimeService/Version
	Jan 03 20:27:24 default-k8s-diff-port-018788 crio[725]: time="2024-01-03 20:27:24.599596637Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=9c573de2-b438-49d3-9141-db439134558a name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 20:27:24 default-k8s-diff-port-018788 crio[725]: time="2024-01-03 20:27:24.599951649Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704313644599939259,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=9c573de2-b438-49d3-9141-db439134558a name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 20:27:24 default-k8s-diff-port-018788 crio[725]: time="2024-01-03 20:27:24.600764245Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7c65bdde-170f-4c74-9c63-602bf4c00fa5 name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:27:24 default-k8s-diff-port-018788 crio[725]: time="2024-01-03 20:27:24.600829034Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7c65bdde-170f-4c74-9c63-602bf4c00fa5 name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:27:24 default-k8s-diff-port-018788 crio[725]: time="2024-01-03 20:27:24.601051128Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3d1fa8b05cd7cc8dd982013b6295fb6f79f2c8db93aa0c408fc57ea7e898125a,PodSandboxId:be6527d03445d6fa58d54394ffd39658d656ac72a22c336705a251baa7a9fcbc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704312873014332423,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef3511cb-5587-4ea5-86b6-d52cc5afb226,},Annotations:map[string]string{io.kubernetes.container.hash: 68c028bd,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c392fb14a91e9f4a6643252d5dfac2e1c164e9206980da27ef53a85db6c130d1,PodSandboxId:baccf7a16fdfeb12fcac098e455733f670ea9f2b569244440ea0b56862308b6e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1704312848149211593,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cfdaacfb-b339-488d-968b-537870733563,},Annotations:map[string]string{io.kubernetes.container.hash: 31b9b4da,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2370f79911fd2108cab00f1fb2d4c8f16fadefbef4d6ee135f875edd865be06,PodSandboxId:56ca6ee8a63f137f2292a05567f59fb92b958a01dcda968d2dbdbafaf2508be9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704312845035625348,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-zxzqg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d066762e-7e1f-4b3a-9b21-6a7a3ca53edd,},Annotations:map[string]string{io.kubernetes.container.hash: a758356f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:365147e198ba529e04c62145182c0e5131c1812d4b8842299d0236cbf556e40f,PodSandboxId:be6527d03445d6fa58d54394ffd39658d656ac72a22c336705a251baa7a9fcbc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1704312840086461478,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: ef3511cb-5587-4ea5-86b6-d52cc5afb226,},Annotations:map[string]string{io.kubernetes.container.hash: 68c028bd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1525243614b036cac3291b5a2fbf29c28fdb68757d81418e7e09cfec2c36032,PodSandboxId:042f1c9914efd103d02790491b12b041d9d6cbf9db26cda3fda0bf0ece589ea5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704312840119281646,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wqjlv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d
e5a1b04-4bce-4111-bfe8-2adb2f947d78,},Annotations:map[string]string{io.kubernetes.container.hash: f4f4cb38,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abbaa7d1ca8582bf77f6d0951a5eca6711a89471f2239b41df4bd359e01e5a7c,PodSandboxId:9f7b2686f78ddceb890ed734bc51b694db7a26c7a3bf42bfc886fee3a075b9ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704312830458531175,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-018788,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 303ebd0fe046fe6897895a41da889b48,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bacdb6bf662499f958fae41c1520a7f963ecec63e66bca7076854532316bd9d,PodSandboxId:12f7cedbe223b2e50b1a66b12ed22ca457c8fd6662f93528652b9057ada4433f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704312830383378086,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-018788,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4aa49e06c8498ad02035a6a3c854470,},An
notations:map[string]string{io.kubernetes.container.hash: d09eccde,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b7de3342fdb5423de182fd009630fc36cee11de3a5f5ff06603fedd80e2d94b,PodSandboxId:f0c80a0255d704e395ebdab78a059b1716a87371444af6e50a4ec1b42ec3ae0a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704312829916887775,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-018788,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5
f53e8f2639e05aaf76598b82d388a7f,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce56b3ad3d4b7d59eb61a980afcc5105c128779756da52fc886d00597b03c6cc,PodSandboxId:16b3c8945f86cea9f3be3272d2381a6e4e036988c3e66976cad2be3ccff0ff8d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704312829748694586,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-018788,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5
1c440e3088352f1d026b9319d0fd133,},Annotations:map[string]string{io.kubernetes.container.hash: a6c6c5d0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7c65bdde-170f-4c74-9c63-602bf4c00fa5 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3d1fa8b05cd7c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   be6527d03445d       storage-provisioner
	c392fb14a91e9       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   baccf7a16fdfe       busybox
	e2370f79911fd       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      13 minutes ago      Running             coredns                   1                   56ca6ee8a63f1       coredns-5dd5756b68-zxzqg
	b1525243614b0       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      13 minutes ago      Running             kube-proxy                1                   042f1c9914efd       kube-proxy-wqjlv
	365147e198ba5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   be6527d03445d       storage-provisioner
	abbaa7d1ca858       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      13 minutes ago      Running             kube-scheduler            1                   9f7b2686f78dd       kube-scheduler-default-k8s-diff-port-018788
	3bacdb6bf6624       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      13 minutes ago      Running             etcd                      1                   12f7cedbe223b       etcd-default-k8s-diff-port-018788
	2b7de3342fdb5       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      13 minutes ago      Running             kube-controller-manager   1                   f0c80a0255d70       kube-controller-manager-default-k8s-diff-port-018788
	ce56b3ad3d4b7       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      13 minutes ago      Running             kube-apiserver            1                   16b3c8945f86c       kube-apiserver-default-k8s-diff-port-018788
	
	
	==> coredns [e2370f79911fd2108cab00f1fb2d4c8f16fadefbef4d6ee135f875edd865be06] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:35653 - 12446 "HINFO IN 3385961418125871742.7974406874081081189. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010827111s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-018788
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-018788
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b6a81cbc05f28310ff11df4170e79e2b8bf477a
	                    minikube.k8s.io/name=default-k8s-diff-port-018788
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_03T20_05_26_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 03 Jan 2024 20:05:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-018788
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 03 Jan 2024 20:27:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 03 Jan 2024 20:24:38 +0000   Wed, 03 Jan 2024 20:05:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 03 Jan 2024 20:24:38 +0000   Wed, 03 Jan 2024 20:05:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 03 Jan 2024 20:24:38 +0000   Wed, 03 Jan 2024 20:05:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 03 Jan 2024 20:24:38 +0000   Wed, 03 Jan 2024 20:14:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.139
	  Hostname:    default-k8s-diff-port-018788
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 8ba1e9c471d0427a84d508ddb34683ca
	  System UUID:                8ba1e9c4-71d0-427a-84d5-08ddb34683ca
	  Boot ID:                    8385a80b-b061-486f-9fd0-c93e71e2403d
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 coredns-5dd5756b68-zxzqg                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-default-k8s-diff-port-018788                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-default-k8s-diff-port-018788             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-018788    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-wqjlv                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-default-k8s-diff-port-018788             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 metrics-server-57f55c9bc5-pgbbj                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         20m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     22m (x7 over 22m)  kubelet          Node default-k8s-diff-port-018788 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    22m (x8 over 22m)  kubelet          Node default-k8s-diff-port-018788 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  22m (x8 over 22m)  kubelet          Node default-k8s-diff-port-018788 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node default-k8s-diff-port-018788 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m                kubelet          Node default-k8s-diff-port-018788 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m                kubelet          Node default-k8s-diff-port-018788 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                21m                kubelet          Node default-k8s-diff-port-018788 status is now: NodeReady
	  Normal  RegisteredNode           21m                node-controller  Node default-k8s-diff-port-018788 event: Registered Node default-k8s-diff-port-018788 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-018788 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-018788 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node default-k8s-diff-port-018788 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node default-k8s-diff-port-018788 event: Registered Node default-k8s-diff-port-018788 in Controller
	
	
	==> dmesg <==
	[Jan 3 20:13] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.067208] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.666113] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.054375] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.130012] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000009] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.401020] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000080] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.643595] systemd-fstab-generator[650]: Ignoring "noauto" for root device
	[  +0.101420] systemd-fstab-generator[661]: Ignoring "noauto" for root device
	[  +0.157078] systemd-fstab-generator[674]: Ignoring "noauto" for root device
	[  +0.123171] systemd-fstab-generator[685]: Ignoring "noauto" for root device
	[  +0.235849] systemd-fstab-generator[709]: Ignoring "noauto" for root device
	[ +17.156848] systemd-fstab-generator[924]: Ignoring "noauto" for root device
	[Jan 3 20:14] kauditd_printk_skb: 29 callbacks suppressed
	
	
	==> etcd [3bacdb6bf662499f958fae41c1520a7f963ecec63e66bca7076854532316bd9d] <==
	{"level":"info","ts":"2024-01-03T20:13:58.700338Z","caller":"traceutil/trace.go:171","msg":"trace[945785892] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/service-account-controller; range_end:; response_count:1; response_revision:564; }","duration":"987.325063ms","start":"2024-01-03T20:13:57.713005Z","end":"2024-01-03T20:13:58.70033Z","steps":["trace[945785892] 'range keys from in-memory index tree'  (duration: 987.06468ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-03T20:13:58.700459Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-03T20:13:57.71299Z","time spent":"987.460969ms","remote":"127.0.0.1:39188","response type":"/etcdserverpb.KV/Range","request count":0,"request size":66,"response count":1,"response size":240,"request content":"key:\"/registry/serviceaccounts/kube-system/service-account-controller\" "}
	{"level":"warn","ts":"2024-01-03T20:13:59.207314Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15793434297366553486,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-01-03T20:13:59.569775Z","caller":"traceutil/trace.go:171","msg":"trace[1158227821] transaction","detail":"{read_only:false; response_revision:565; number_of_response:1; }","duration":"864.458702ms","start":"2024-01-03T20:13:58.705295Z","end":"2024-01-03T20:13:59.569754Z","steps":["trace[1158227821] 'process raft request'  (duration: 864.18892ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-03T20:13:59.570431Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-03T20:13:58.705281Z","time spent":"864.618129ms","remote":"127.0.0.1:39160","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":969,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-pgbbj.17a6ef7ec432b3e4\" mod_revision:547 > success:<request_put:<key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-pgbbj.17a6ef7ec432b3e4\" value_size:874 lease:6570062260511777395 >> failure:<request_range:<key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-pgbbj.17a6ef7ec432b3e4\" > >"}
	{"level":"info","ts":"2024-01-03T20:13:59.569786Z","caller":"traceutil/trace.go:171","msg":"trace[315153466] linearizableReadLoop","detail":"{readStateIndex:603; appliedIndex:602; }","duration":"862.543613ms","start":"2024-01-03T20:13:58.707166Z","end":"2024-01-03T20:13:59.56971Z","steps":["trace[315153466] 'read index received'  (duration: 862.209979ms)","trace[315153466] 'applied index is now lower than readState.Index'  (duration: 333.074µs)"],"step_count":2}
	{"level":"warn","ts":"2024-01-03T20:13:59.570828Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"861.458577ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-default-k8s-diff-port-018788\" ","response":"range_response_count:1 size:5461"}
	{"level":"info","ts":"2024-01-03T20:13:59.570852Z","caller":"traceutil/trace.go:171","msg":"trace[1720327201] range","detail":"{range_begin:/registry/pods/kube-system/etcd-default-k8s-diff-port-018788; range_end:; response_count:1; response_revision:565; }","duration":"861.487607ms","start":"2024-01-03T20:13:58.709357Z","end":"2024-01-03T20:13:59.570845Z","steps":["trace[1720327201] 'agreement among raft nodes before linearized reading'  (duration: 861.411033ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-03T20:13:59.570871Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-03T20:13:58.709348Z","time spent":"861.518202ms","remote":"127.0.0.1:39184","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":1,"response size":5483,"request content":"key:\"/registry/pods/kube-system/etcd-default-k8s-diff-port-018788\" "}
	{"level":"warn","ts":"2024-01-03T20:13:59.57148Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"864.322666ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/pv-protection-controller\" ","response":"range_response_count:1 size:214"}
	{"level":"info","ts":"2024-01-03T20:13:59.571533Z","caller":"traceutil/trace.go:171","msg":"trace[1870821920] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/pv-protection-controller; range_end:; response_count:1; response_revision:565; }","duration":"864.380988ms","start":"2024-01-03T20:13:58.707145Z","end":"2024-01-03T20:13:59.571526Z","steps":["trace[1870821920] 'agreement among raft nodes before linearized reading'  (duration: 863.464351ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-03T20:13:59.571578Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-03T20:13:58.707135Z","time spent":"864.432868ms","remote":"127.0.0.1:39188","response type":"/etcdserverpb.KV/Range","request count":0,"request size":64,"response count":1,"response size":236,"request content":"key:\"/registry/serviceaccounts/kube-system/pv-protection-controller\" "}
	{"level":"warn","ts":"2024-01-03T20:14:00.097299Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"333.432294ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15793434297366553493 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/coredns-5dd5756b68-zxzqg.17a6ef7ec7aff220\" mod_revision:561 > success:<request_put:<key:\"/registry/events/kube-system/coredns-5dd5756b68-zxzqg.17a6ef7ec7aff220\" value_size:729 lease:6570062260511777395 >> failure:<request_range:<key:\"/registry/events/kube-system/coredns-5dd5756b68-zxzqg.17a6ef7ec7aff220\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-01-03T20:14:00.098408Z","caller":"traceutil/trace.go:171","msg":"trace[1027757226] linearizableReadLoop","detail":"{readStateIndex:604; appliedIndex:603; }","duration":"516.697306ms","start":"2024-01-03T20:13:59.581694Z","end":"2024-01-03T20:14:00.098391Z","steps":["trace[1027757226] 'read index received'  (duration: 181.792383ms)","trace[1027757226] 'applied index is now lower than readState.Index'  (duration: 334.903556ms)"],"step_count":2}
	{"level":"info","ts":"2024-01-03T20:14:00.098586Z","caller":"traceutil/trace.go:171","msg":"trace[238564223] transaction","detail":"{read_only:false; response_revision:566; number_of_response:1; }","duration":"518.806985ms","start":"2024-01-03T20:13:59.57977Z","end":"2024-01-03T20:14:00.098577Z","steps":["trace[238564223] 'process raft request'  (duration: 183.851432ms)","trace[238564223] 'compare'  (duration: 330.400221ms)"],"step_count":2}
	{"level":"warn","ts":"2024-01-03T20:14:00.098669Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-03T20:13:59.579757Z","time spent":"518.871861ms","remote":"127.0.0.1:39160","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":817,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/coredns-5dd5756b68-zxzqg.17a6ef7ec7aff220\" mod_revision:561 > success:<request_put:<key:\"/registry/events/kube-system/coredns-5dd5756b68-zxzqg.17a6ef7ec7aff220\" value_size:729 lease:6570062260511777395 >> failure:<request_range:<key:\"/registry/events/kube-system/coredns-5dd5756b68-zxzqg.17a6ef7ec7aff220\" > >"}
	{"level":"warn","ts":"2024-01-03T20:14:00.098922Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"517.234497ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-018788\" ","response":"range_response_count:1 size:6780"}
	{"level":"info","ts":"2024-01-03T20:14:00.098986Z","caller":"traceutil/trace.go:171","msg":"trace[1159610439] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-018788; range_end:; response_count:1; response_revision:566; }","duration":"517.302039ms","start":"2024-01-03T20:13:59.581676Z","end":"2024-01-03T20:14:00.098978Z","steps":["trace[1159610439] 'agreement among raft nodes before linearized reading'  (duration: 517.168573ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-03T20:14:00.099026Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-03T20:13:59.581666Z","time spent":"517.35421ms","remote":"127.0.0.1:39184","response type":"/etcdserverpb.KV/Range","request count":0,"request size":72,"response count":1,"response size":6802,"request content":"key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-018788\" "}
	{"level":"warn","ts":"2024-01-03T20:14:00.099317Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"511.029754ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/ephemeral-volume-controller\" ","response":"range_response_count:1 size:220"}
	{"level":"info","ts":"2024-01-03T20:14:00.09974Z","caller":"traceutil/trace.go:171","msg":"trace[1912603648] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/ephemeral-volume-controller; range_end:; response_count:1; response_revision:566; }","duration":"511.448791ms","start":"2024-01-03T20:13:59.588279Z","end":"2024-01-03T20:14:00.099728Z","steps":["trace[1912603648] 'agreement among raft nodes before linearized reading'  (duration: 510.99958ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-03T20:14:00.100011Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-03T20:13:59.588268Z","time spent":"511.658757ms","remote":"127.0.0.1:39188","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":242,"request content":"key:\"/registry/serviceaccounts/kube-system/ephemeral-volume-controller\" "}
	{"level":"info","ts":"2024-01-03T20:23:54.130271Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":856}
	{"level":"info","ts":"2024-01-03T20:23:54.133806Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":856,"took":"2.551329ms","hash":3697743643}
	{"level":"info","ts":"2024-01-03T20:23:54.133913Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3697743643,"revision":856,"compact-revision":-1}
	
	
	==> kernel <==
	 20:27:24 up 14 min,  0 users,  load average: 0.32, 0.27, 0.15
	Linux default-k8s-diff-port-018788 5.10.57 #1 SMP Sat Dec 16 11:03:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [ce56b3ad3d4b7d59eb61a980afcc5105c128779756da52fc886d00597b03c6cc] <==
	I0103 20:23:55.708824       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0103 20:23:56.709046       1 handler_proxy.go:93] no RequestInfo found in the context
	E0103 20:23:56.709172       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0103 20:23:56.709180       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0103 20:23:56.709323       1 handler_proxy.go:93] no RequestInfo found in the context
	E0103 20:23:56.709391       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0103 20:23:56.710680       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0103 20:24:55.587221       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0103 20:24:56.709739       1 handler_proxy.go:93] no RequestInfo found in the context
	E0103 20:24:56.709796       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0103 20:24:56.709805       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0103 20:24:56.710808       1 handler_proxy.go:93] no RequestInfo found in the context
	E0103 20:24:56.710912       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0103 20:24:56.710940       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0103 20:25:55.586445       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0103 20:26:55.587171       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0103 20:26:56.710166       1 handler_proxy.go:93] no RequestInfo found in the context
	E0103 20:26:56.710416       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0103 20:26:56.710426       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0103 20:26:56.711381       1 handler_proxy.go:93] no RequestInfo found in the context
	E0103 20:26:56.711615       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0103 20:26:56.711623       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [2b7de3342fdb5423de182fd009630fc36cee11de3a5f5ff06603fedd80e2d94b] <==
	I0103 20:21:41.309744       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0103 20:22:10.935424       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0103 20:22:11.318275       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0103 20:22:40.943613       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0103 20:22:41.326406       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0103 20:23:10.950998       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0103 20:23:11.336663       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0103 20:23:40.957200       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0103 20:23:41.347982       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0103 20:24:10.963233       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0103 20:24:11.357940       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0103 20:24:40.970061       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0103 20:24:41.367168       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0103 20:25:03.795488       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="330.839µs"
	E0103 20:25:10.975977       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0103 20:25:11.376748       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0103 20:25:16.805553       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="211.68µs"
	E0103 20:25:40.981945       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0103 20:25:41.385744       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0103 20:26:10.987793       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0103 20:26:11.392975       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0103 20:26:40.995049       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0103 20:26:41.402318       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0103 20:27:11.001313       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0103 20:27:11.411927       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [b1525243614b036cac3291b5a2fbf29c28fdb68757d81418e7e09cfec2c36032] <==
	I0103 20:14:02.083263       1 server_others.go:69] "Using iptables proxy"
	I0103 20:14:02.110269       1 node.go:141] Successfully retrieved node IP: 192.168.39.139
	I0103 20:14:02.214438       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0103 20:14:02.214519       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0103 20:14:02.217341       1 server_others.go:152] "Using iptables Proxier"
	I0103 20:14:02.217536       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0103 20:14:02.217714       1 server.go:846] "Version info" version="v1.28.4"
	I0103 20:14:02.221329       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0103 20:14:02.222827       1 config.go:188] "Starting service config controller"
	I0103 20:14:02.222908       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0103 20:14:02.222965       1 config.go:97] "Starting endpoint slice config controller"
	I0103 20:14:02.222986       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0103 20:14:02.228416       1 config.go:315] "Starting node config controller"
	I0103 20:14:02.228492       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0103 20:14:02.323511       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0103 20:14:02.323616       1 shared_informer.go:318] Caches are synced for service config
	I0103 20:14:02.328743       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [abbaa7d1ca8582bf77f6d0951a5eca6711a89471f2239b41df4bd359e01e5a7c] <==
	I0103 20:13:52.770655       1 serving.go:348] Generated self-signed cert in-memory
	W0103 20:13:55.641836       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0103 20:13:55.641943       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0103 20:13:55.641988       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0103 20:13:55.642013       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0103 20:13:55.727971       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0103 20:13:55.728190       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0103 20:13:55.731897       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0103 20:13:55.731989       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0103 20:13:55.734983       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0103 20:13:55.735213       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0103 20:13:55.834363       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Wed 2024-01-03 20:13:21 UTC, ends at Wed 2024-01-03 20:27:25 UTC. --
	Jan 03 20:24:48 default-k8s-diff-port-018788 kubelet[930]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 03 20:24:48 default-k8s-diff-port-018788 kubelet[930]: E0103 20:24:48.803627     930 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 03 20:24:48 default-k8s-diff-port-018788 kubelet[930]: E0103 20:24:48.804940     930 kuberuntime_image.go:53] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 03 20:24:48 default-k8s-diff-port-018788 kubelet[930]: E0103 20:24:48.805267     930 kuberuntime_manager.go:1261] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-2pb95,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:
&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessag
ePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-pgbbj_kube-system(ee3963d9-1627-4e78-91e5-1f92c2011f4b): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 03 20:24:48 default-k8s-diff-port-018788 kubelet[930]: E0103 20:24:48.805340     930 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-pgbbj" podUID="ee3963d9-1627-4e78-91e5-1f92c2011f4b"
	Jan 03 20:25:03 default-k8s-diff-port-018788 kubelet[930]: E0103 20:25:03.775904     930 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pgbbj" podUID="ee3963d9-1627-4e78-91e5-1f92c2011f4b"
	Jan 03 20:25:16 default-k8s-diff-port-018788 kubelet[930]: E0103 20:25:16.783005     930 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pgbbj" podUID="ee3963d9-1627-4e78-91e5-1f92c2011f4b"
	Jan 03 20:25:29 default-k8s-diff-port-018788 kubelet[930]: E0103 20:25:29.775232     930 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pgbbj" podUID="ee3963d9-1627-4e78-91e5-1f92c2011f4b"
	Jan 03 20:25:41 default-k8s-diff-port-018788 kubelet[930]: E0103 20:25:41.775341     930 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pgbbj" podUID="ee3963d9-1627-4e78-91e5-1f92c2011f4b"
	Jan 03 20:25:48 default-k8s-diff-port-018788 kubelet[930]: E0103 20:25:48.794867     930 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 03 20:25:48 default-k8s-diff-port-018788 kubelet[930]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 03 20:25:48 default-k8s-diff-port-018788 kubelet[930]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 03 20:25:48 default-k8s-diff-port-018788 kubelet[930]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 03 20:25:56 default-k8s-diff-port-018788 kubelet[930]: E0103 20:25:56.778428     930 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pgbbj" podUID="ee3963d9-1627-4e78-91e5-1f92c2011f4b"
	Jan 03 20:26:10 default-k8s-diff-port-018788 kubelet[930]: E0103 20:26:10.776436     930 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pgbbj" podUID="ee3963d9-1627-4e78-91e5-1f92c2011f4b"
	Jan 03 20:26:23 default-k8s-diff-port-018788 kubelet[930]: E0103 20:26:23.776197     930 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pgbbj" podUID="ee3963d9-1627-4e78-91e5-1f92c2011f4b"
	Jan 03 20:26:35 default-k8s-diff-port-018788 kubelet[930]: E0103 20:26:35.775994     930 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pgbbj" podUID="ee3963d9-1627-4e78-91e5-1f92c2011f4b"
	Jan 03 20:26:46 default-k8s-diff-port-018788 kubelet[930]: E0103 20:26:46.776825     930 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pgbbj" podUID="ee3963d9-1627-4e78-91e5-1f92c2011f4b"
	Jan 03 20:26:48 default-k8s-diff-port-018788 kubelet[930]: E0103 20:26:48.792701     930 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 03 20:26:48 default-k8s-diff-port-018788 kubelet[930]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 03 20:26:48 default-k8s-diff-port-018788 kubelet[930]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 03 20:26:48 default-k8s-diff-port-018788 kubelet[930]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 03 20:26:59 default-k8s-diff-port-018788 kubelet[930]: E0103 20:26:59.776392     930 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pgbbj" podUID="ee3963d9-1627-4e78-91e5-1f92c2011f4b"
	Jan 03 20:27:10 default-k8s-diff-port-018788 kubelet[930]: E0103 20:27:10.776365     930 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pgbbj" podUID="ee3963d9-1627-4e78-91e5-1f92c2011f4b"
	Jan 03 20:27:22 default-k8s-diff-port-018788 kubelet[930]: E0103 20:27:22.775982     930 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pgbbj" podUID="ee3963d9-1627-4e78-91e5-1f92c2011f4b"
	
	
	==> storage-provisioner [365147e198ba529e04c62145182c0e5131c1812d4b8842299d0236cbf556e40f] <==
	I0103 20:14:01.995617       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0103 20:14:32.036225       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [3d1fa8b05cd7cc8dd982013b6295fb6f79f2c8db93aa0c408fc57ea7e898125a] <==
	I0103 20:14:33.158472       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0103 20:14:33.170158       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0103 20:14:33.170273       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0103 20:14:50.584462       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0103 20:14:50.584776       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-018788_a2335e3e-d422-40a0-ba4c-1fdc7c29325b!
	I0103 20:14:50.587381       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9d22760a-d369-4f87-9839-fef853b9b5b7", APIVersion:"v1", ResourceVersion:"644", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-018788_a2335e3e-d422-40a0-ba4c-1fdc7c29325b became leader
	I0103 20:14:50.685054       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-018788_a2335e3e-d422-40a0-ba4c-1fdc7c29325b!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-018788 -n default-k8s-diff-port-018788
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-018788 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-pgbbj
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-018788 describe pod metrics-server-57f55c9bc5-pgbbj
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-018788 describe pod metrics-server-57f55c9bc5-pgbbj: exit status 1 (69.356691ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-pgbbj" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-018788 describe pod metrics-server-57f55c9bc5-pgbbj: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (543.12s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (523.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0103 20:23:50.790943   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/enable-default-cni-719541/client.crt: no such file or directory
E0103 20:24:07.102377   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/ingress-addon-legacy-736101/client.crt: no such file or directory
E0103 20:24:09.452262   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/bridge-719541/client.crt: no such file or directory
E0103 20:24:21.013456   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/auto-719541/client.crt: no such file or directory
E0103 20:24:48.942581   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/kindnet-719541/client.crt: no such file or directory
E0103 20:24:55.579502   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/flannel-719541/client.crt: no such file or directory
E0103 20:25:30.151847   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/ingress-addon-legacy-736101/client.crt: no such file or directory
E0103 20:25:32.496950   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/bridge-719541/client.crt: no such file or directory
E0103 20:25:48.654340   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/functional-166268/client.crt: no such file or directory
E0103 20:25:55.308237   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/addons-848866/client.crt: no such file or directory
E0103 20:26:30.038950   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/calico-719541/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-927922 -n old-k8s-version-927922
start_stop_delete_test.go:287: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-01-03 20:32:15.416207557 +0000 UTC m=+5700.888784554
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-927922 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-927922 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.489µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-927922 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-927922 -n old-k8s-version-927922
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-927922 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-927922 logs -n 25: (1.627131996s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-719541 sudo cat                              | bridge-719541                | jenkins | v1.32.0 | 03 Jan 24 20:04 UTC | 03 Jan 24 20:04 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-719541 sudo                                  | bridge-719541                | jenkins | v1.32.0 | 03 Jan 24 20:04 UTC | 03 Jan 24 20:04 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-719541 sudo                                  | bridge-719541                | jenkins | v1.32.0 | 03 Jan 24 20:04 UTC | 03 Jan 24 20:04 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-719541 sudo                                  | bridge-719541                | jenkins | v1.32.0 | 03 Jan 24 20:04 UTC | 03 Jan 24 20:04 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-719541 sudo find                             | bridge-719541                | jenkins | v1.32.0 | 03 Jan 24 20:04 UTC | 03 Jan 24 20:04 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-719541 sudo crio                             | bridge-719541                | jenkins | v1.32.0 | 03 Jan 24 20:04 UTC | 03 Jan 24 20:04 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-719541                                       | bridge-719541                | jenkins | v1.32.0 | 03 Jan 24 20:04 UTC | 03 Jan 24 20:04 UTC |
	| delete  | -p                                                     | disable-driver-mounts-350596 | jenkins | v1.32.0 | 03 Jan 24 20:04 UTC | 03 Jan 24 20:04 UTC |
	|         | disable-driver-mounts-350596                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-018788 | jenkins | v1.32.0 | 03 Jan 24 20:04 UTC | 03 Jan 24 20:06 UTC |
	|         | default-k8s-diff-port-018788                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-927922        | old-k8s-version-927922       | jenkins | v1.32.0 | 03 Jan 24 20:05 UTC | 03 Jan 24 20:05 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-927922                              | old-k8s-version-927922       | jenkins | v1.32.0 | 03 Jan 24 20:05 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-451331            | embed-certs-451331           | jenkins | v1.32.0 | 03 Jan 24 20:05 UTC | 03 Jan 24 20:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-451331                                  | embed-certs-451331           | jenkins | v1.32.0 | 03 Jan 24 20:06 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-749210             | no-preload-749210            | jenkins | v1.32.0 | 03 Jan 24 20:06 UTC | 03 Jan 24 20:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-749210                                   | no-preload-749210            | jenkins | v1.32.0 | 03 Jan 24 20:06 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-018788  | default-k8s-diff-port-018788 | jenkins | v1.32.0 | 03 Jan 24 20:06 UTC | 03 Jan 24 20:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-018788 | jenkins | v1.32.0 | 03 Jan 24 20:06 UTC |                     |
	|         | default-k8s-diff-port-018788                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-927922             | old-k8s-version-927922       | jenkins | v1.32.0 | 03 Jan 24 20:07 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-927922                              | old-k8s-version-927922       | jenkins | v1.32.0 | 03 Jan 24 20:07 UTC | 03 Jan 24 20:14 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-451331                 | embed-certs-451331           | jenkins | v1.32.0 | 03 Jan 24 20:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-451331                                  | embed-certs-451331           | jenkins | v1.32.0 | 03 Jan 24 20:08 UTC | 03 Jan 24 20:17 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-749210                  | no-preload-749210            | jenkins | v1.32.0 | 03 Jan 24 20:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-018788       | default-k8s-diff-port-018788 | jenkins | v1.32.0 | 03 Jan 24 20:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-749210                                   | no-preload-749210            | jenkins | v1.32.0 | 03 Jan 24 20:09 UTC | 03 Jan 24 20:18 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-018788 | jenkins | v1.32.0 | 03 Jan 24 20:09 UTC | 03 Jan 24 20:18 UTC |
	|         | default-k8s-diff-port-018788                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/03 20:09:05
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0103 20:09:05.502375   62050 out.go:296] Setting OutFile to fd 1 ...
	I0103 20:09:05.502548   62050 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 20:09:05.502558   62050 out.go:309] Setting ErrFile to fd 2...
	I0103 20:09:05.502566   62050 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 20:09:05.502759   62050 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17885-9609/.minikube/bin
	I0103 20:09:05.503330   62050 out.go:303] Setting JSON to false
	I0103 20:09:05.504222   62050 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6693,"bootTime":1704305853,"procs":223,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0103 20:09:05.504283   62050 start.go:138] virtualization: kvm guest
	I0103 20:09:05.507002   62050 out.go:177] * [default-k8s-diff-port-018788] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0103 20:09:05.508642   62050 out.go:177]   - MINIKUBE_LOCATION=17885
	I0103 20:09:05.508667   62050 notify.go:220] Checking for updates...
	I0103 20:09:05.510296   62050 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0103 20:09:05.511927   62050 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17885-9609/kubeconfig
	I0103 20:09:05.513487   62050 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17885-9609/.minikube
	I0103 20:09:05.515064   62050 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0103 20:09:05.516515   62050 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0103 20:09:05.518301   62050 config.go:182] Loaded profile config "default-k8s-diff-port-018788": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 20:09:05.518774   62050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:09:05.518827   62050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:09:05.533730   62050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37361
	I0103 20:09:05.534098   62050 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:09:05.534667   62050 main.go:141] libmachine: Using API Version  1
	I0103 20:09:05.534699   62050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:09:05.535027   62050 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:09:05.535298   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .DriverName
	I0103 20:09:05.535543   62050 driver.go:392] Setting default libvirt URI to qemu:///system
	I0103 20:09:05.535823   62050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:09:05.535855   62050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:09:05.549808   62050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33389
	I0103 20:09:05.550147   62050 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:09:05.550708   62050 main.go:141] libmachine: Using API Version  1
	I0103 20:09:05.550733   62050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:09:05.551041   62050 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:09:05.551258   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .DriverName
	I0103 20:09:05.583981   62050 out.go:177] * Using the kvm2 driver based on existing profile
	I0103 20:09:05.585560   62050 start.go:298] selected driver: kvm2
	I0103 20:09:05.585580   62050 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-018788 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuber
netesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-018788 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.139 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRe
quested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 20:09:05.585707   62050 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0103 20:09:05.586411   62050 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:09:05.586494   62050 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17885-9609/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0103 20:09:05.601346   62050 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0103 20:09:05.601747   62050 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0103 20:09:05.601812   62050 cni.go:84] Creating CNI manager for ""
	I0103 20:09:05.601828   62050 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0103 20:09:05.601839   62050 start_flags.go:323] config:
	{Name:default-k8s-diff-port-018788 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-01878
8 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.139 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/
home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 20:09:05.602011   62050 iso.go:125] acquiring lock: {Name:mk59d09085a9554144b68de9b7bfe0e0fce53cc5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:09:05.604007   62050 out.go:177] * Starting control plane node default-k8s-diff-port-018788 in cluster default-k8s-diff-port-018788
	I0103 20:09:03.174819   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:09:06.246788   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:09:04.840696   62015 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0103 20:09:04.840826   62015 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/no-preload-749210/config.json ...
	I0103 20:09:04.840950   62015 cache.go:107] acquiring lock: {Name:mk76774936d94ce826f83ee0faaaf3557831e6bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:09:04.840994   62015 cache.go:107] acquiring lock: {Name:mk25b47a2b083e99837dbc206b0832b20d7da669 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:09:04.841017   62015 cache.go:107] acquiring lock: {Name:mk0a26120b5274bc796f1ae286da54dda262a5a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:09:04.841059   62015 cache.go:115] /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 exists
	I0103 20:09:04.841064   62015 start.go:365] acquiring machines lock for no-preload-749210: {Name:mk43df5d7e9fef8aa5f3e5c539ca15bff35ae8cf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0103 20:09:04.841070   62015 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2" took 128.344µs
	I0103 20:09:04.841078   62015 cache.go:115] /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 exists
	I0103 20:09:04.841081   62015 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 succeeded
	I0103 20:09:04.841085   62015 cache.go:115] /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 exists
	I0103 20:09:04.840951   62015 cache.go:107] acquiring lock: {Name:mk372d2259ddc4c784d2a14a7416ba9b749d6f9a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:09:04.841089   62015 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9" took 97.811µs
	I0103 20:09:04.841093   62015 cache.go:96] cache image "registry.k8s.io/etcd:3.5.10-0" -> "/home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0" took 87.964µs
	I0103 20:09:04.841108   62015 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 succeeded
	I0103 20:09:04.841109   62015 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.10-0 -> /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 succeeded
	I0103 20:09:04.841115   62015 cache.go:115] /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0103 20:09:04.841052   62015 cache.go:107] acquiring lock: {Name:mk04d21d7cdef9332755ef804a44022ba9c4a8c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:09:04.841129   62015 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 185.143µs
	I0103 20:09:04.841155   62015 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0103 20:09:04.841139   62015 cache.go:107] acquiring lock: {Name:mk5c34e1c9b00efde01e776962411ad1105596ae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:09:04.841183   62015 cache.go:115] /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0103 20:09:04.841203   62015 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1" took 176.832µs
	I0103 20:09:04.841212   62015 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0103 20:09:04.841400   62015 cache.go:107] acquiring lock: {Name:mk0ae9e390d74a93289bc4e45b5511dce57beeb9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:09:04.841216   62015 cache.go:107] acquiring lock: {Name:mkccb08ee6224be0e6786052f4bebc8d21ec8a42 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:09:04.841614   62015 cache.go:115] /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 exists
	I0103 20:09:04.841633   62015 cache.go:115] /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 exists
	I0103 20:09:04.841675   62015 cache.go:115] /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 exists
	I0103 20:09:04.841679   62015 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2" took 497.325µs
	I0103 20:09:04.841672   62015 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2" took 557.891µs
	I0103 20:09:04.841716   62015 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 succeeded
	I0103 20:09:04.841696   62015 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2" took 499.205µs
	I0103 20:09:04.841745   62015 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 succeeded
	I0103 20:09:04.841706   62015 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 succeeded
	I0103 20:09:04.841755   62015 cache.go:87] Successfully saved all images to host disk.
	I0103 20:09:05.605517   62050 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0103 20:09:05.605574   62050 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0103 20:09:05.605590   62050 cache.go:56] Caching tarball of preloaded images
	I0103 20:09:05.605669   62050 preload.go:174] Found /home/jenkins/minikube-integration/17885-9609/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0103 20:09:05.605681   62050 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0103 20:09:05.605787   62050 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/default-k8s-diff-port-018788/config.json ...
	I0103 20:09:05.605973   62050 start.go:365] acquiring machines lock for default-k8s-diff-port-018788: {Name:mk43df5d7e9fef8aa5f3e5c539ca15bff35ae8cf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0103 20:09:12.326805   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:09:15.398807   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:09:21.478760   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:09:24.550821   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:09:30.630841   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:09:33.702766   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:09:39.782732   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:09:42.854926   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:09:48.934815   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:09:52.006845   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:09:58.086804   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:10:01.158903   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:10:07.238808   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:10:10.310897   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:10:16.390869   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:10:19.462833   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:10:25.542866   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:10:28.614753   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:10:34.694867   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:10:37.766876   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:10:43.846838   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:10:46.918843   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:10:52.998853   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:10:56.070822   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:11:02.150825   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:11:05.222884   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:11:11.302787   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:11:14.374818   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:11:20.454810   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:11:23.526899   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:11:29.606842   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:11:32.678789   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:11:38.758787   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:11:41.830855   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:11:47.910801   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:11:50.982868   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:11:57.062889   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:12:00.134834   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:12:06.214856   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:12:09.286845   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:12:15.366787   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:12:18.438756   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:12:24.518814   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:12:27.590887   61400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.12:22: connect: no route to host
	I0103 20:12:30.594981   61676 start.go:369] acquired machines lock for "embed-certs-451331" in 3m56.986277612s
	I0103 20:12:30.595030   61676 start.go:96] Skipping create...Using existing machine configuration
	I0103 20:12:30.595039   61676 fix.go:54] fixHost starting: 
	I0103 20:12:30.595434   61676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:12:30.595466   61676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:12:30.609917   61676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43047
	I0103 20:12:30.610302   61676 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:12:30.610819   61676 main.go:141] libmachine: Using API Version  1
	I0103 20:12:30.610845   61676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:12:30.611166   61676 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:12:30.611348   61676 main.go:141] libmachine: (embed-certs-451331) Calling .DriverName
	I0103 20:12:30.611486   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetState
	I0103 20:12:30.613108   61676 fix.go:102] recreateIfNeeded on embed-certs-451331: state=Stopped err=<nil>
	I0103 20:12:30.613128   61676 main.go:141] libmachine: (embed-certs-451331) Calling .DriverName
	W0103 20:12:30.613291   61676 fix.go:128] unexpected machine state, will restart: <nil>
	I0103 20:12:30.615194   61676 out.go:177] * Restarting existing kvm2 VM for "embed-certs-451331" ...
	I0103 20:12:30.592855   61400 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0103 20:12:30.592889   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHHostname
	I0103 20:12:30.594843   61400 machine.go:91] provisioned docker machine in 4m37.406324683s
	I0103 20:12:30.594886   61400 fix.go:56] fixHost completed within 4m37.42774841s
	I0103 20:12:30.594892   61400 start.go:83] releasing machines lock for "old-k8s-version-927922", held for 4m37.427764519s
	W0103 20:12:30.594913   61400 start.go:694] error starting host: provision: host is not running
	W0103 20:12:30.595005   61400 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0103 20:12:30.595014   61400 start.go:709] Will try again in 5 seconds ...
	I0103 20:12:30.616365   61676 main.go:141] libmachine: (embed-certs-451331) Calling .Start
	I0103 20:12:30.616513   61676 main.go:141] libmachine: (embed-certs-451331) Ensuring networks are active...
	I0103 20:12:30.617380   61676 main.go:141] libmachine: (embed-certs-451331) Ensuring network default is active
	I0103 20:12:30.617718   61676 main.go:141] libmachine: (embed-certs-451331) Ensuring network mk-embed-certs-451331 is active
	I0103 20:12:30.618103   61676 main.go:141] libmachine: (embed-certs-451331) Getting domain xml...
	I0103 20:12:30.618735   61676 main.go:141] libmachine: (embed-certs-451331) Creating domain...
	I0103 20:12:31.839751   61676 main.go:141] libmachine: (embed-certs-451331) Waiting to get IP...
	I0103 20:12:31.840608   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:31.841035   61676 main.go:141] libmachine: (embed-certs-451331) DBG | unable to find current IP address of domain embed-certs-451331 in network mk-embed-certs-451331
	I0103 20:12:31.841117   61676 main.go:141] libmachine: (embed-certs-451331) DBG | I0103 20:12:31.841008   62575 retry.go:31] will retry after 303.323061ms: waiting for machine to come up
	I0103 20:12:32.146508   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:32.147005   61676 main.go:141] libmachine: (embed-certs-451331) DBG | unable to find current IP address of domain embed-certs-451331 in network mk-embed-certs-451331
	I0103 20:12:32.147037   61676 main.go:141] libmachine: (embed-certs-451331) DBG | I0103 20:12:32.146950   62575 retry.go:31] will retry after 240.92709ms: waiting for machine to come up
	I0103 20:12:32.389487   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:32.389931   61676 main.go:141] libmachine: (embed-certs-451331) DBG | unable to find current IP address of domain embed-certs-451331 in network mk-embed-certs-451331
	I0103 20:12:32.389962   61676 main.go:141] libmachine: (embed-certs-451331) DBG | I0103 20:12:32.389887   62575 retry.go:31] will retry after 473.263026ms: waiting for machine to come up
	I0103 20:12:32.864624   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:32.865060   61676 main.go:141] libmachine: (embed-certs-451331) DBG | unable to find current IP address of domain embed-certs-451331 in network mk-embed-certs-451331
	I0103 20:12:32.865082   61676 main.go:141] libmachine: (embed-certs-451331) DBG | I0103 20:12:32.865006   62575 retry.go:31] will retry after 473.373684ms: waiting for machine to come up
	I0103 20:12:33.339691   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:33.340156   61676 main.go:141] libmachine: (embed-certs-451331) DBG | unable to find current IP address of domain embed-certs-451331 in network mk-embed-certs-451331
	I0103 20:12:33.340189   61676 main.go:141] libmachine: (embed-certs-451331) DBG | I0103 20:12:33.340098   62575 retry.go:31] will retry after 639.850669ms: waiting for machine to come up
	I0103 20:12:35.596669   61400 start.go:365] acquiring machines lock for old-k8s-version-927922: {Name:mk43df5d7e9fef8aa5f3e5c539ca15bff35ae8cf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0103 20:12:33.982104   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:33.982622   61676 main.go:141] libmachine: (embed-certs-451331) DBG | unable to find current IP address of domain embed-certs-451331 in network mk-embed-certs-451331
	I0103 20:12:33.982655   61676 main.go:141] libmachine: (embed-certs-451331) DBG | I0103 20:12:33.982583   62575 retry.go:31] will retry after 589.282725ms: waiting for machine to come up
	I0103 20:12:34.573280   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:34.573692   61676 main.go:141] libmachine: (embed-certs-451331) DBG | unable to find current IP address of domain embed-certs-451331 in network mk-embed-certs-451331
	I0103 20:12:34.573716   61676 main.go:141] libmachine: (embed-certs-451331) DBG | I0103 20:12:34.573639   62575 retry.go:31] will retry after 884.387817ms: waiting for machine to come up
	I0103 20:12:35.459819   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:35.460233   61676 main.go:141] libmachine: (embed-certs-451331) DBG | unable to find current IP address of domain embed-certs-451331 in network mk-embed-certs-451331
	I0103 20:12:35.460287   61676 main.go:141] libmachine: (embed-certs-451331) DBG | I0103 20:12:35.460168   62575 retry.go:31] will retry after 1.326571684s: waiting for machine to come up
	I0103 20:12:36.788923   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:36.789429   61676 main.go:141] libmachine: (embed-certs-451331) DBG | unable to find current IP address of domain embed-certs-451331 in network mk-embed-certs-451331
	I0103 20:12:36.789452   61676 main.go:141] libmachine: (embed-certs-451331) DBG | I0103 20:12:36.789395   62575 retry.go:31] will retry after 1.436230248s: waiting for machine to come up
	I0103 20:12:38.227994   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:38.228374   61676 main.go:141] libmachine: (embed-certs-451331) DBG | unable to find current IP address of domain embed-certs-451331 in network mk-embed-certs-451331
	I0103 20:12:38.228397   61676 main.go:141] libmachine: (embed-certs-451331) DBG | I0103 20:12:38.228336   62575 retry.go:31] will retry after 2.127693351s: waiting for machine to come up
	I0103 20:12:40.358485   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:40.358968   61676 main.go:141] libmachine: (embed-certs-451331) DBG | unable to find current IP address of domain embed-certs-451331 in network mk-embed-certs-451331
	I0103 20:12:40.358998   61676 main.go:141] libmachine: (embed-certs-451331) DBG | I0103 20:12:40.358912   62575 retry.go:31] will retry after 1.816116886s: waiting for machine to come up
	I0103 20:12:42.177782   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:42.178359   61676 main.go:141] libmachine: (embed-certs-451331) DBG | unable to find current IP address of domain embed-certs-451331 in network mk-embed-certs-451331
	I0103 20:12:42.178390   61676 main.go:141] libmachine: (embed-certs-451331) DBG | I0103 20:12:42.178296   62575 retry.go:31] will retry after 3.199797073s: waiting for machine to come up
	I0103 20:12:45.381712   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:45.382053   61676 main.go:141] libmachine: (embed-certs-451331) DBG | unable to find current IP address of domain embed-certs-451331 in network mk-embed-certs-451331
	I0103 20:12:45.382075   61676 main.go:141] libmachine: (embed-certs-451331) DBG | I0103 20:12:45.381991   62575 retry.go:31] will retry after 3.573315393s: waiting for machine to come up
	I0103 20:12:50.159164   62015 start.go:369] acquired machines lock for "no-preload-749210" in 3m45.318070652s
	I0103 20:12:50.159226   62015 start.go:96] Skipping create...Using existing machine configuration
	I0103 20:12:50.159235   62015 fix.go:54] fixHost starting: 
	I0103 20:12:50.159649   62015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:12:50.159688   62015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:12:50.176573   62015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34959
	I0103 20:12:50.176998   62015 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:12:50.177504   62015 main.go:141] libmachine: Using API Version  1
	I0103 20:12:50.177529   62015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:12:50.177925   62015 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:12:50.178125   62015 main.go:141] libmachine: (no-preload-749210) Calling .DriverName
	I0103 20:12:50.178297   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetState
	I0103 20:12:50.179850   62015 fix.go:102] recreateIfNeeded on no-preload-749210: state=Stopped err=<nil>
	I0103 20:12:50.179873   62015 main.go:141] libmachine: (no-preload-749210) Calling .DriverName
	W0103 20:12:50.180066   62015 fix.go:128] unexpected machine state, will restart: <nil>
	I0103 20:12:50.182450   62015 out.go:177] * Restarting existing kvm2 VM for "no-preload-749210" ...
	I0103 20:12:48.959159   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:48.959637   61676 main.go:141] libmachine: (embed-certs-451331) Found IP for machine: 192.168.50.197
	I0103 20:12:48.959655   61676 main.go:141] libmachine: (embed-certs-451331) Reserving static IP address...
	I0103 20:12:48.959666   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has current primary IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:48.960051   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "embed-certs-451331", mac: "52:54:00:38:4a:19", ip: "192.168.50.197"} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:12:48.960073   61676 main.go:141] libmachine: (embed-certs-451331) DBG | skip adding static IP to network mk-embed-certs-451331 - found existing host DHCP lease matching {name: "embed-certs-451331", mac: "52:54:00:38:4a:19", ip: "192.168.50.197"}
	I0103 20:12:48.960086   61676 main.go:141] libmachine: (embed-certs-451331) Reserved static IP address: 192.168.50.197
	I0103 20:12:48.960101   61676 main.go:141] libmachine: (embed-certs-451331) Waiting for SSH to be available...
	I0103 20:12:48.960117   61676 main.go:141] libmachine: (embed-certs-451331) DBG | Getting to WaitForSSH function...
	I0103 20:12:48.962160   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:48.962443   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:4a:19", ip: ""} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:12:48.962478   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:48.962611   61676 main.go:141] libmachine: (embed-certs-451331) DBG | Using SSH client type: external
	I0103 20:12:48.962631   61676 main.go:141] libmachine: (embed-certs-451331) DBG | Using SSH private key: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/embed-certs-451331/id_rsa (-rw-------)
	I0103 20:12:48.962661   61676 main.go:141] libmachine: (embed-certs-451331) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.197 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17885-9609/.minikube/machines/embed-certs-451331/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0103 20:12:48.962681   61676 main.go:141] libmachine: (embed-certs-451331) DBG | About to run SSH command:
	I0103 20:12:48.962718   61676 main.go:141] libmachine: (embed-certs-451331) DBG | exit 0
	I0103 20:12:49.058790   61676 main.go:141] libmachine: (embed-certs-451331) DBG | SSH cmd err, output: <nil>: 
	I0103 20:12:49.059176   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetConfigRaw
	I0103 20:12:49.059838   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetIP
	I0103 20:12:49.062025   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:49.062407   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:4a:19", ip: ""} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:12:49.062440   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:49.062697   61676 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/embed-certs-451331/config.json ...
	I0103 20:12:49.062878   61676 machine.go:88] provisioning docker machine ...
	I0103 20:12:49.062894   61676 main.go:141] libmachine: (embed-certs-451331) Calling .DriverName
	I0103 20:12:49.063097   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetMachineName
	I0103 20:12:49.063258   61676 buildroot.go:166] provisioning hostname "embed-certs-451331"
	I0103 20:12:49.063278   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetMachineName
	I0103 20:12:49.063423   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHHostname
	I0103 20:12:49.065735   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:49.066121   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:4a:19", ip: ""} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:12:49.066161   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:49.066328   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHPort
	I0103 20:12:49.066507   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHKeyPath
	I0103 20:12:49.066695   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHKeyPath
	I0103 20:12:49.066860   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHUsername
	I0103 20:12:49.067065   61676 main.go:141] libmachine: Using SSH client type: native
	I0103 20:12:49.067455   61676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.50.197 22 <nil> <nil>}
	I0103 20:12:49.067469   61676 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-451331 && echo "embed-certs-451331" | sudo tee /etc/hostname
	I0103 20:12:49.210431   61676 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-451331
	
	I0103 20:12:49.210465   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHHostname
	I0103 20:12:49.213162   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:49.213503   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:4a:19", ip: ""} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:12:49.213573   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:49.213682   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHPort
	I0103 20:12:49.213911   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHKeyPath
	I0103 20:12:49.214094   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHKeyPath
	I0103 20:12:49.214289   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHUsername
	I0103 20:12:49.214449   61676 main.go:141] libmachine: Using SSH client type: native
	I0103 20:12:49.214837   61676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.50.197 22 <nil> <nil>}
	I0103 20:12:49.214856   61676 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-451331' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-451331/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-451331' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0103 20:12:49.350098   61676 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0103 20:12:49.350134   61676 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17885-9609/.minikube CaCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17885-9609/.minikube}
	I0103 20:12:49.350158   61676 buildroot.go:174] setting up certificates
	I0103 20:12:49.350172   61676 provision.go:83] configureAuth start
	I0103 20:12:49.350188   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetMachineName
	I0103 20:12:49.350497   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetIP
	I0103 20:12:49.352947   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:49.353356   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:4a:19", ip: ""} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:12:49.353387   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:49.353448   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHHostname
	I0103 20:12:49.355701   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:49.356005   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:4a:19", ip: ""} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:12:49.356033   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:49.356183   61676 provision.go:138] copyHostCerts
	I0103 20:12:49.356241   61676 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem, removing ...
	I0103 20:12:49.356254   61676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem
	I0103 20:12:49.356322   61676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem (1078 bytes)
	I0103 20:12:49.356413   61676 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem, removing ...
	I0103 20:12:49.356421   61676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem
	I0103 20:12:49.356446   61676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem (1123 bytes)
	I0103 20:12:49.356506   61676 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem, removing ...
	I0103 20:12:49.356513   61676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem
	I0103 20:12:49.356535   61676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem (1679 bytes)
	I0103 20:12:49.356587   61676 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem org=jenkins.embed-certs-451331 san=[192.168.50.197 192.168.50.197 localhost 127.0.0.1 minikube embed-certs-451331]
	I0103 20:12:49.413721   61676 provision.go:172] copyRemoteCerts
	I0103 20:12:49.413781   61676 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0103 20:12:49.413804   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHHostname
	I0103 20:12:49.416658   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:49.417143   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:4a:19", ip: ""} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:12:49.417170   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:49.417420   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHPort
	I0103 20:12:49.417617   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHKeyPath
	I0103 20:12:49.417814   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHUsername
	I0103 20:12:49.417977   61676 sshutil.go:53] new ssh client: &{IP:192.168.50.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/embed-certs-451331/id_rsa Username:docker}
	I0103 20:12:49.510884   61676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0103 20:12:49.533465   61676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0103 20:12:49.554895   61676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0103 20:12:49.576069   61676 provision.go:86] duration metric: configureAuth took 225.882364ms
	I0103 20:12:49.576094   61676 buildroot.go:189] setting minikube options for container-runtime
	I0103 20:12:49.576310   61676 config.go:182] Loaded profile config "embed-certs-451331": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 20:12:49.576387   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHHostname
	I0103 20:12:49.579119   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:49.579413   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:4a:19", ip: ""} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:12:49.579461   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:49.579590   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHPort
	I0103 20:12:49.579780   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHKeyPath
	I0103 20:12:49.579968   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHKeyPath
	I0103 20:12:49.580121   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHUsername
	I0103 20:12:49.580271   61676 main.go:141] libmachine: Using SSH client type: native
	I0103 20:12:49.580591   61676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.50.197 22 <nil> <nil>}
	I0103 20:12:49.580615   61676 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0103 20:12:49.883159   61676 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0103 20:12:49.883188   61676 machine.go:91] provisioned docker machine in 820.299871ms
	I0103 20:12:49.883199   61676 start.go:300] post-start starting for "embed-certs-451331" (driver="kvm2")
	I0103 20:12:49.883212   61676 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0103 20:12:49.883239   61676 main.go:141] libmachine: (embed-certs-451331) Calling .DriverName
	I0103 20:12:49.883565   61676 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0103 20:12:49.883599   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHHostname
	I0103 20:12:49.886365   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:49.886658   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:4a:19", ip: ""} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:12:49.886695   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:49.886878   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHPort
	I0103 20:12:49.887091   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHKeyPath
	I0103 20:12:49.887293   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHUsername
	I0103 20:12:49.887468   61676 sshutil.go:53] new ssh client: &{IP:192.168.50.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/embed-certs-451331/id_rsa Username:docker}
	I0103 20:12:49.985529   61676 ssh_runner.go:195] Run: cat /etc/os-release
	I0103 20:12:49.989732   61676 info.go:137] Remote host: Buildroot 2021.02.12
	I0103 20:12:49.989758   61676 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-9609/.minikube/addons for local assets ...
	I0103 20:12:49.989820   61676 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-9609/.minikube/files for local assets ...
	I0103 20:12:49.989891   61676 filesync.go:149] local asset: /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem -> 167952.pem in /etc/ssl/certs
	I0103 20:12:49.989981   61676 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0103 20:12:49.999882   61676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem --> /etc/ssl/certs/167952.pem (1708 bytes)
	I0103 20:12:50.022936   61676 start.go:303] post-start completed in 139.710189ms
	I0103 20:12:50.022966   61676 fix.go:56] fixHost completed within 19.427926379s
	I0103 20:12:50.023002   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHHostname
	I0103 20:12:50.025667   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:50.025940   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:4a:19", ip: ""} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:12:50.025973   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:50.026212   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHPort
	I0103 20:12:50.026424   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHKeyPath
	I0103 20:12:50.026671   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHKeyPath
	I0103 20:12:50.026838   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHUsername
	I0103 20:12:50.027074   61676 main.go:141] libmachine: Using SSH client type: native
	I0103 20:12:50.027381   61676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.50.197 22 <nil> <nil>}
	I0103 20:12:50.027393   61676 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0103 20:12:50.159031   61676 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704312770.110466062
	
	I0103 20:12:50.159053   61676 fix.go:206] guest clock: 1704312770.110466062
	I0103 20:12:50.159061   61676 fix.go:219] Guest: 2024-01-03 20:12:50.110466062 +0000 UTC Remote: 2024-01-03 20:12:50.022969488 +0000 UTC m=+256.568741537 (delta=87.496574ms)
	I0103 20:12:50.159083   61676 fix.go:190] guest clock delta is within tolerance: 87.496574ms
	I0103 20:12:50.159089   61676 start.go:83] releasing machines lock for "embed-certs-451331", held for 19.564082089s
	I0103 20:12:50.159117   61676 main.go:141] libmachine: (embed-certs-451331) Calling .DriverName
	I0103 20:12:50.159421   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetIP
	I0103 20:12:50.162216   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:50.162550   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:4a:19", ip: ""} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:12:50.162577   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:50.162762   61676 main.go:141] libmachine: (embed-certs-451331) Calling .DriverName
	I0103 20:12:50.163248   61676 main.go:141] libmachine: (embed-certs-451331) Calling .DriverName
	I0103 20:12:50.163433   61676 main.go:141] libmachine: (embed-certs-451331) Calling .DriverName
	I0103 20:12:50.163532   61676 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0103 20:12:50.163583   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHHostname
	I0103 20:12:50.163644   61676 ssh_runner.go:195] Run: cat /version.json
	I0103 20:12:50.163671   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHHostname
	I0103 20:12:50.166588   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:50.166753   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:50.166957   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:4a:19", ip: ""} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:12:50.166987   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:50.167192   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHPort
	I0103 20:12:50.167329   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:4a:19", ip: ""} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:12:50.167358   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:50.167362   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHKeyPath
	I0103 20:12:50.167500   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHUsername
	I0103 20:12:50.167590   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHPort
	I0103 20:12:50.167684   61676 sshutil.go:53] new ssh client: &{IP:192.168.50.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/embed-certs-451331/id_rsa Username:docker}
	I0103 20:12:50.167761   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHKeyPath
	I0103 20:12:50.167905   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHUsername
	I0103 20:12:50.168096   61676 sshutil.go:53] new ssh client: &{IP:192.168.50.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/embed-certs-451331/id_rsa Username:docker}
	I0103 20:12:50.298482   61676 ssh_runner.go:195] Run: systemctl --version
	I0103 20:12:50.304252   61676 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0103 20:12:50.442709   61676 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0103 20:12:50.448879   61676 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0103 20:12:50.448959   61676 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0103 20:12:50.467183   61676 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0103 20:12:50.467203   61676 start.go:475] detecting cgroup driver to use...
	I0103 20:12:50.467269   61676 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0103 20:12:50.482438   61676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0103 20:12:50.493931   61676 docker.go:203] disabling cri-docker service (if available) ...
	I0103 20:12:50.493997   61676 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0103 20:12:50.506860   61676 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0103 20:12:50.519279   61676 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0103 20:12:50.627391   61676 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0103 20:12:50.748160   61676 docker.go:219] disabling docker service ...
	I0103 20:12:50.748220   61676 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0103 20:12:50.760970   61676 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0103 20:12:50.772252   61676 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0103 20:12:50.889707   61676 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0103 20:12:51.003794   61676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0103 20:12:51.016226   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0103 20:12:51.032543   61676 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0103 20:12:51.032600   61676 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:12:51.042477   61676 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0103 20:12:51.042559   61676 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:12:51.053103   61676 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:12:51.063469   61676 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:12:51.073912   61676 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0103 20:12:51.083314   61676 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0103 20:12:51.092920   61676 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0103 20:12:51.092969   61676 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0103 20:12:51.106690   61676 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0103 20:12:51.115815   61676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0103 20:12:51.230139   61676 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0103 20:12:51.413184   61676 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0103 20:12:51.413315   61676 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0103 20:12:51.417926   61676 start.go:543] Will wait 60s for crictl version
	I0103 20:12:51.417988   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:12:51.421507   61676 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0103 20:12:51.465370   61676 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0103 20:12:51.465453   61676 ssh_runner.go:195] Run: crio --version
	I0103 20:12:51.519590   61676 ssh_runner.go:195] Run: crio --version
	I0103 20:12:51.582633   61676 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0103 20:12:51.583888   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetIP
	I0103 20:12:51.587068   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:51.587442   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:4a:19", ip: ""} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:12:51.587486   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:12:51.587724   61676 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0103 20:12:51.591798   61676 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0103 20:12:51.602798   61676 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0103 20:12:51.602871   61676 ssh_runner.go:195] Run: sudo crictl images --output json
	I0103 20:12:51.641736   61676 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0103 20:12:51.641799   61676 ssh_runner.go:195] Run: which lz4
	I0103 20:12:51.645386   61676 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0103 20:12:51.649168   61676 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0103 20:12:51.649196   61676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0103 20:12:53.428537   61676 crio.go:444] Took 1.783185 seconds to copy over tarball
	I0103 20:12:53.428601   61676 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0103 20:12:50.183891   62015 main.go:141] libmachine: (no-preload-749210) Calling .Start
	I0103 20:12:50.184083   62015 main.go:141] libmachine: (no-preload-749210) Ensuring networks are active...
	I0103 20:12:50.184749   62015 main.go:141] libmachine: (no-preload-749210) Ensuring network default is active
	I0103 20:12:50.185084   62015 main.go:141] libmachine: (no-preload-749210) Ensuring network mk-no-preload-749210 is active
	I0103 20:12:50.185435   62015 main.go:141] libmachine: (no-preload-749210) Getting domain xml...
	I0103 20:12:50.186067   62015 main.go:141] libmachine: (no-preload-749210) Creating domain...
	I0103 20:12:51.468267   62015 main.go:141] libmachine: (no-preload-749210) Waiting to get IP...
	I0103 20:12:51.469108   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:12:51.469584   62015 main.go:141] libmachine: (no-preload-749210) DBG | unable to find current IP address of domain no-preload-749210 in network mk-no-preload-749210
	I0103 20:12:51.469664   62015 main.go:141] libmachine: (no-preload-749210) DBG | I0103 20:12:51.469570   62702 retry.go:31] will retry after 254.191618ms: waiting for machine to come up
	I0103 20:12:51.724958   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:12:51.725657   62015 main.go:141] libmachine: (no-preload-749210) DBG | unable to find current IP address of domain no-preload-749210 in network mk-no-preload-749210
	I0103 20:12:51.725683   62015 main.go:141] libmachine: (no-preload-749210) DBG | I0103 20:12:51.725609   62702 retry.go:31] will retry after 279.489548ms: waiting for machine to come up
	I0103 20:12:52.007176   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:12:52.007682   62015 main.go:141] libmachine: (no-preload-749210) DBG | unable to find current IP address of domain no-preload-749210 in network mk-no-preload-749210
	I0103 20:12:52.007713   62015 main.go:141] libmachine: (no-preload-749210) DBG | I0103 20:12:52.007628   62702 retry.go:31] will retry after 422.96552ms: waiting for machine to come up
	I0103 20:12:52.432345   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:12:52.432873   62015 main.go:141] libmachine: (no-preload-749210) DBG | unable to find current IP address of domain no-preload-749210 in network mk-no-preload-749210
	I0103 20:12:52.432912   62015 main.go:141] libmachine: (no-preload-749210) DBG | I0103 20:12:52.432844   62702 retry.go:31] will retry after 561.295375ms: waiting for machine to come up
	I0103 20:12:52.995438   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:12:52.995929   62015 main.go:141] libmachine: (no-preload-749210) DBG | unable to find current IP address of domain no-preload-749210 in network mk-no-preload-749210
	I0103 20:12:52.995963   62015 main.go:141] libmachine: (no-preload-749210) DBG | I0103 20:12:52.995878   62702 retry.go:31] will retry after 547.962782ms: waiting for machine to come up
	I0103 20:12:53.545924   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:12:53.546473   62015 main.go:141] libmachine: (no-preload-749210) DBG | unable to find current IP address of domain no-preload-749210 in network mk-no-preload-749210
	I0103 20:12:53.546558   62015 main.go:141] libmachine: (no-preload-749210) DBG | I0103 20:12:53.546453   62702 retry.go:31] will retry after 927.631327ms: waiting for machine to come up
	I0103 20:12:54.475549   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:12:54.476000   62015 main.go:141] libmachine: (no-preload-749210) DBG | unable to find current IP address of domain no-preload-749210 in network mk-no-preload-749210
	I0103 20:12:54.476046   62015 main.go:141] libmachine: (no-preload-749210) DBG | I0103 20:12:54.475945   62702 retry.go:31] will retry after 880.192703ms: waiting for machine to come up
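The interleaved retry.go:31 lines above show libmachine polling the libvirt DHCP leases with a growing, jittered delay until the new domain reports an IP. A minimal stdlib-only sketch of that wait loop follows; the lookupIP helper is a hypothetical stand-in for the real lease query, not minikube's API.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for querying the libvirt DHCP leases; it is an
// assumption for illustration and always fails here.
func lookupIP(domain string) (string, error) {
	return "", errors.New("unable to find current IP address")
}

// waitForIP retries lookupIP with a growing, jittered delay, roughly
// mirroring the retry.go lines in the log above.
func waitForIP(domain string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(domain); err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay = delay * 3 / 2 // back off gradually between attempts
	}
	return "", fmt.Errorf("timed out waiting for %s to come up", domain)
}

func main() {
	if _, err := waitForIP("no-preload-749210", 3*time.Second); err != nil {
		fmt.Println(err)
	}
}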
	I0103 20:12:56.224357   61676 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.795734066s)
	I0103 20:12:56.224386   61676 crio.go:451] Took 2.795820 seconds to extract the tarball
	I0103 20:12:56.224406   61676 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0103 20:12:56.266955   61676 ssh_runner.go:195] Run: sudo crictl images --output json
	I0103 20:12:56.318766   61676 crio.go:496] all images are preloaded for cri-o runtime.
	I0103 20:12:56.318789   61676 cache_images.go:84] Images are preloaded, skipping loading
	I0103 20:12:56.318871   61676 ssh_runner.go:195] Run: crio config
	I0103 20:12:56.378376   61676 cni.go:84] Creating CNI manager for ""
	I0103 20:12:56.378401   61676 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0103 20:12:56.378423   61676 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0103 20:12:56.378451   61676 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.197 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-451331 NodeName:embed-certs-451331 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.197"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.197 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0103 20:12:56.378619   61676 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.197
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-451331"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.197
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.197"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0103 20:12:56.378714   61676 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-451331 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.197
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-451331 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0103 20:12:56.378777   61676 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0103 20:12:56.387967   61676 binaries.go:44] Found k8s binaries, skipping transfer
	I0103 20:12:56.388037   61676 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0103 20:12:56.396000   61676 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0103 20:12:56.411880   61676 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0103 20:12:56.427567   61676 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
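The 2105-byte file written here is the multi-document kubeadm config rendered above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration in one YAML stream). A quick stdlib-only sketch for splitting such a stream and listing the kind of each document when inspecting what was generated; this is illustrative tooling, not part of minikube.

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// Path taken from the scp line in the log above.
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Split on YAML document separators and report each document's kind.
	for i, doc := range strings.Split(string(data), "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(strings.TrimSpace(line), "kind:") {
				fmt.Printf("document %d: %s\n", i+1, strings.TrimSpace(line))
			}
		}
	}
}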
	I0103 20:12:56.443342   61676 ssh_runner.go:195] Run: grep 192.168.50.197	control-plane.minikube.internal$ /etc/hosts
	I0103 20:12:56.446991   61676 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.197	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0103 20:12:56.458659   61676 certs.go:56] Setting up /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/embed-certs-451331 for IP: 192.168.50.197
	I0103 20:12:56.458696   61676 certs.go:190] acquiring lock for shared ca certs: {Name:mkcbd6a6a2f3ee7625ecf4a1f72bb7f9689bd33d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:12:56.458844   61676 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17885-9609/.minikube/ca.key
	I0103 20:12:56.458904   61676 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.key
	I0103 20:12:56.459010   61676 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/embed-certs-451331/client.key
	I0103 20:12:56.459092   61676 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/embed-certs-451331/apiserver.key.d719e12a
	I0103 20:12:56.459159   61676 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/embed-certs-451331/proxy-client.key
	I0103 20:12:56.459299   61676 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795.pem (1338 bytes)
	W0103 20:12:56.459341   61676 certs.go:433] ignoring /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795_empty.pem, impossibly tiny 0 bytes
	I0103 20:12:56.459358   61676 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem (1675 bytes)
	I0103 20:12:56.459400   61676 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem (1078 bytes)
	I0103 20:12:56.459434   61676 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem (1123 bytes)
	I0103 20:12:56.459466   61676 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem (1679 bytes)
	I0103 20:12:56.459522   61676 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem (1708 bytes)
	I0103 20:12:56.460408   61676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/embed-certs-451331/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0103 20:12:56.481997   61676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/embed-certs-451331/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0103 20:12:56.504016   61676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/embed-certs-451331/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0103 20:12:56.526477   61676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/embed-certs-451331/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0103 20:12:56.548471   61676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0103 20:12:56.570763   61676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0103 20:12:56.592910   61676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0103 20:12:56.617765   61676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0103 20:12:56.646025   61676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem --> /usr/share/ca-certificates/167952.pem (1708 bytes)
	I0103 20:12:56.668629   61676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0103 20:12:56.690927   61676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795.pem --> /usr/share/ca-certificates/16795.pem (1338 bytes)
	I0103 20:12:56.712067   61676 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0103 20:12:56.727773   61676 ssh_runner.go:195] Run: openssl version
	I0103 20:12:56.733000   61676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0103 20:12:56.742921   61676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:12:56.747499   61676 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  3 18:58 /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:12:56.747562   61676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:12:56.752732   61676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0103 20:12:56.762510   61676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16795.pem && ln -fs /usr/share/ca-certificates/16795.pem /etc/ssl/certs/16795.pem"
	I0103 20:12:56.772401   61676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16795.pem
	I0103 20:12:56.777123   61676 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  3 19:07 /usr/share/ca-certificates/16795.pem
	I0103 20:12:56.777180   61676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16795.pem
	I0103 20:12:56.782490   61676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16795.pem /etc/ssl/certs/51391683.0"
	I0103 20:12:56.793745   61676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167952.pem && ln -fs /usr/share/ca-certificates/167952.pem /etc/ssl/certs/167952.pem"
	I0103 20:12:56.805156   61676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167952.pem
	I0103 20:12:56.809897   61676 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  3 19:07 /usr/share/ca-certificates/167952.pem
	I0103 20:12:56.809954   61676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167952.pem
	I0103 20:12:56.815432   61676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167952.pem /etc/ssl/certs/3ec20f2e.0"
	I0103 20:12:56.826498   61676 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0103 20:12:56.831012   61676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0103 20:12:56.837150   61676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0103 20:12:56.843256   61676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0103 20:12:56.849182   61676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0103 20:12:56.854882   61676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0103 20:12:56.862018   61676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
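The openssl x509 -checkend 86400 runs above ask whether each control-plane certificate expires within the next 24 hours (a non-zero exit means it does, which would force regeneration). The same check expressed in Go with crypto/x509, as a sketch rather than minikube's actual code path:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// equivalent to `openssl x509 -noout -in path -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Same certificate the log checks with -checkend 86400.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}
	fmt.Println("expires within 24h:", soon)
}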
	I0103 20:12:56.867863   61676 kubeadm.go:404] StartCluster: {Name:embed-certs-451331 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.28.4 ClusterName:embed-certs-451331 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.197 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 20:12:56.867982   61676 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0103 20:12:56.868029   61676 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0103 20:12:56.909417   61676 cri.go:89] found id: ""
	I0103 20:12:56.909523   61676 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0103 20:12:56.919487   61676 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0103 20:12:56.919515   61676 kubeadm.go:636] restartCluster start
	I0103 20:12:56.919568   61676 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0103 20:12:56.929137   61676 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:12:56.930326   61676 kubeconfig.go:92] found "embed-certs-451331" server: "https://192.168.50.197:8443"
	I0103 20:12:56.932682   61676 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0103 20:12:56.941846   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:12:56.941909   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:12:56.953616   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:12:57.442188   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:12:57.442281   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:12:57.458303   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:12:57.942905   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:12:57.942988   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:12:57.955860   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:12:58.442326   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:12:58.442420   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:12:58.454294   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:12:55.357897   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:12:55.358462   62015 main.go:141] libmachine: (no-preload-749210) DBG | unable to find current IP address of domain no-preload-749210 in network mk-no-preload-749210
	I0103 20:12:55.358492   62015 main.go:141] libmachine: (no-preload-749210) DBG | I0103 20:12:55.358429   62702 retry.go:31] will retry after 1.158958207s: waiting for machine to come up
	I0103 20:12:56.518837   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:12:56.519260   62015 main.go:141] libmachine: (no-preload-749210) DBG | unable to find current IP address of domain no-preload-749210 in network mk-no-preload-749210
	I0103 20:12:56.519306   62015 main.go:141] libmachine: (no-preload-749210) DBG | I0103 20:12:56.519224   62702 retry.go:31] will retry after 1.620553071s: waiting for machine to come up
	I0103 20:12:58.141980   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:12:58.142505   62015 main.go:141] libmachine: (no-preload-749210) DBG | unable to find current IP address of domain no-preload-749210 in network mk-no-preload-749210
	I0103 20:12:58.142549   62015 main.go:141] libmachine: (no-preload-749210) DBG | I0103 20:12:58.142454   62702 retry.go:31] will retry after 1.525068593s: waiting for machine to come up
	I0103 20:12:59.670380   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:12:59.670880   62015 main.go:141] libmachine: (no-preload-749210) DBG | unable to find current IP address of domain no-preload-749210 in network mk-no-preload-749210
	I0103 20:12:59.670909   62015 main.go:141] libmachine: (no-preload-749210) DBG | I0103 20:12:59.670827   62702 retry.go:31] will retry after 1.772431181s: waiting for machine to come up
	I0103 20:12:58.942887   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:12:58.942975   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:12:58.956781   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:12:59.442313   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:12:59.442402   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:12:59.455837   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:12:59.942355   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:12:59.942439   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:12:59.954326   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:00.441870   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:13:00.441960   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:00.454004   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:00.941882   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:13:00.941995   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:00.958004   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:01.442573   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:13:01.442664   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:01.458604   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:01.942062   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:13:01.942170   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:01.958396   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:02.442928   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:13:02.443027   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:02.456612   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:02.941943   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:13:02.942056   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:02.953939   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:03.442552   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:13:03.442633   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:03.454840   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
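The repeated "Checking apiserver status" entries are a fixed-interval poll: roughly every 500ms minikube runs `sudo pgrep -xnf kube-apiserver.*minikube.*` over SSH and treats a non-zero exit as "not up yet", until the surrounding context times out. A stripped-down local sketch of that loop; running pgrep directly instead of over SSH is an assumption made for illustration.

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls pgrep until the pattern matches or ctx expires,
// mirroring the 500ms cadence of the status checks in the log.
func waitForProcess(ctx context.Context, pattern string) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		if out, err := exec.Command("pgrep", "-xnf", pattern).Output(); err == nil {
			fmt.Printf("found pid: %s", out)
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("apiserver process never appeared: %w", ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	if err := waitForProcess(ctx, "kube-apiserver.*minikube.*"); err != nil {
		fmt.Println(err)
	}
}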
	I0103 20:13:01.445221   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:01.445608   62015 main.go:141] libmachine: (no-preload-749210) DBG | unable to find current IP address of domain no-preload-749210 in network mk-no-preload-749210
	I0103 20:13:01.445647   62015 main.go:141] libmachine: (no-preload-749210) DBG | I0103 20:13:01.445565   62702 retry.go:31] will retry after 2.830747633s: waiting for machine to come up
	I0103 20:13:04.279514   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:04.279996   62015 main.go:141] libmachine: (no-preload-749210) DBG | unable to find current IP address of domain no-preload-749210 in network mk-no-preload-749210
	I0103 20:13:04.280020   62015 main.go:141] libmachine: (no-preload-749210) DBG | I0103 20:13:04.279963   62702 retry.go:31] will retry after 4.03880385s: waiting for machine to come up
	I0103 20:13:03.942687   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:13:03.942774   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:03.954714   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:04.442265   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:13:04.442357   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:04.454216   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:04.942877   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:13:04.942952   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:04.954944   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:05.442467   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:13:05.442596   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:05.454305   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:05.942383   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:13:05.942468   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:05.954074   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:06.442723   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:13:06.442811   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:06.454629   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:06.942200   61676 api_server.go:166] Checking apiserver status ...
	I0103 20:13:06.942283   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:06.953799   61676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:06.953829   61676 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0103 20:13:06.953836   61676 kubeadm.go:1135] stopping kube-system containers ...
	I0103 20:13:06.953845   61676 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0103 20:13:06.953904   61676 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0103 20:13:06.989109   61676 cri.go:89] found id: ""
	I0103 20:13:06.989214   61676 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0103 20:13:07.004822   61676 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0103 20:13:07.014393   61676 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0103 20:13:07.014454   61676 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0103 20:13:07.023669   61676 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0103 20:13:07.023691   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:07.139277   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:07.626388   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:07.814648   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:07.901750   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:07.962623   61676 api_server.go:52] waiting for apiserver process to appear ...
	I0103 20:13:07.962710   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:13:08.463820   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:13:08.322801   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:08.323160   62015 main.go:141] libmachine: (no-preload-749210) Found IP for machine: 192.168.61.245
	I0103 20:13:08.323203   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has current primary IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:08.323222   62015 main.go:141] libmachine: (no-preload-749210) Reserving static IP address...
	I0103 20:13:08.323600   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "no-preload-749210", mac: "52:54:00:fb:87:c7", ip: "192.168.61.245"} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:08.323632   62015 main.go:141] libmachine: (no-preload-749210) Reserved static IP address: 192.168.61.245
	I0103 20:13:08.323664   62015 main.go:141] libmachine: (no-preload-749210) DBG | skip adding static IP to network mk-no-preload-749210 - found existing host DHCP lease matching {name: "no-preload-749210", mac: "52:54:00:fb:87:c7", ip: "192.168.61.245"}
	I0103 20:13:08.323684   62015 main.go:141] libmachine: (no-preload-749210) DBG | Getting to WaitForSSH function...
	I0103 20:13:08.323698   62015 main.go:141] libmachine: (no-preload-749210) Waiting for SSH to be available...
	I0103 20:13:08.325529   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:08.325831   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:87:c7", ip: ""} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:08.325863   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:08.325949   62015 main.go:141] libmachine: (no-preload-749210) DBG | Using SSH client type: external
	I0103 20:13:08.325977   62015 main.go:141] libmachine: (no-preload-749210) DBG | Using SSH private key: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/no-preload-749210/id_rsa (-rw-------)
	I0103 20:13:08.326013   62015 main.go:141] libmachine: (no-preload-749210) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.245 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17885-9609/.minikube/machines/no-preload-749210/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0103 20:13:08.326030   62015 main.go:141] libmachine: (no-preload-749210) DBG | About to run SSH command:
	I0103 20:13:08.326053   62015 main.go:141] libmachine: (no-preload-749210) DBG | exit 0
	I0103 20:13:08.418368   62015 main.go:141] libmachine: (no-preload-749210) DBG | SSH cmd err, output: <nil>: 
	I0103 20:13:08.418718   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetConfigRaw
	I0103 20:13:08.419464   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetIP
	I0103 20:13:08.421838   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:08.422172   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:87:c7", ip: ""} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:08.422199   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:08.422460   62015 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/no-preload-749210/config.json ...
	I0103 20:13:08.422680   62015 machine.go:88] provisioning docker machine ...
	I0103 20:13:08.422702   62015 main.go:141] libmachine: (no-preload-749210) Calling .DriverName
	I0103 20:13:08.422883   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetMachineName
	I0103 20:13:08.423027   62015 buildroot.go:166] provisioning hostname "no-preload-749210"
	I0103 20:13:08.423047   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetMachineName
	I0103 20:13:08.423153   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHHostname
	I0103 20:13:08.425105   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:08.425377   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:87:c7", ip: ""} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:08.425408   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:08.425583   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHPort
	I0103 20:13:08.425734   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHKeyPath
	I0103 20:13:08.425869   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHKeyPath
	I0103 20:13:08.425987   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHUsername
	I0103 20:13:08.426160   62015 main.go:141] libmachine: Using SSH client type: native
	I0103 20:13:08.426488   62015 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.61.245 22 <nil> <nil>}
	I0103 20:13:08.426501   62015 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-749210 && echo "no-preload-749210" | sudo tee /etc/hostname
	I0103 20:13:08.579862   62015 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-749210
	
	I0103 20:13:08.579892   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHHostname
	I0103 20:13:08.583166   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:08.583600   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:87:c7", ip: ""} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:08.583635   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:08.583828   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHPort
	I0103 20:13:08.584039   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHKeyPath
	I0103 20:13:08.584225   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHKeyPath
	I0103 20:13:08.584391   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHUsername
	I0103 20:13:08.584593   62015 main.go:141] libmachine: Using SSH client type: native
	I0103 20:13:08.584928   62015 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.61.245 22 <nil> <nil>}
	I0103 20:13:08.584954   62015 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-749210' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-749210/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-749210' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0103 20:13:08.729661   62015 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0103 20:13:08.729697   62015 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17885-9609/.minikube CaCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17885-9609/.minikube}
	I0103 20:13:08.729738   62015 buildroot.go:174] setting up certificates
	I0103 20:13:08.729759   62015 provision.go:83] configureAuth start
	I0103 20:13:08.729776   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetMachineName
	I0103 20:13:08.730101   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetIP
	I0103 20:13:08.733282   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:08.733694   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:87:c7", ip: ""} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:08.733728   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:08.733868   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHHostname
	I0103 20:13:08.736223   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:08.736557   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:87:c7", ip: ""} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:08.736589   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:08.736763   62015 provision.go:138] copyHostCerts
	I0103 20:13:08.736830   62015 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem, removing ...
	I0103 20:13:08.736847   62015 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem
	I0103 20:13:08.736913   62015 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem (1078 bytes)
	I0103 20:13:08.737035   62015 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem, removing ...
	I0103 20:13:08.737047   62015 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem
	I0103 20:13:08.737077   62015 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem (1123 bytes)
	I0103 20:13:08.737177   62015 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem, removing ...
	I0103 20:13:08.737188   62015 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem
	I0103 20:13:08.737218   62015 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem (1679 bytes)
	I0103 20:13:08.737295   62015 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem org=jenkins.no-preload-749210 san=[192.168.61.245 192.168.61.245 localhost 127.0.0.1 minikube no-preload-749210]
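The provision.go:112 line generates a server certificate whose subject alternative names cover the VM IP, localhost and the machine name, signed by the CA under .minikube/certs. The sketch below only shows how such a SAN list is expressed with crypto/x509; for brevity it self-signs instead of signing with the CA key, so it is illustrative rather than what machine provisioning actually does.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-749210"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump above
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs listed in the provision.go line above.
		DNSNames:    []string{"localhost", "minikube", "no-preload-749210"},
		IPAddresses: []net.IP{net.ParseIP("192.168.61.245"), net.ParseIP("127.0.0.1")},
	}
	// Self-signed for illustration; the real flow signs with ca.pem/ca-key.pem.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}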
	I0103 20:13:09.018604   62015 provision.go:172] copyRemoteCerts
	I0103 20:13:09.018662   62015 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0103 20:13:09.018684   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHHostname
	I0103 20:13:09.021339   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:09.021729   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:87:c7", ip: ""} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:09.021777   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:09.021852   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHPort
	I0103 20:13:09.022068   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHKeyPath
	I0103 20:13:09.022220   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHUsername
	I0103 20:13:09.022405   62015 sshutil.go:53] new ssh client: &{IP:192.168.61.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/no-preload-749210/id_rsa Username:docker}
	I0103 20:13:09.120023   62015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0103 20:13:09.143242   62015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0103 20:13:09.166206   62015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0103 20:13:09.192425   62015 provision.go:86] duration metric: configureAuth took 462.649611ms
	I0103 20:13:09.192457   62015 buildroot.go:189] setting minikube options for container-runtime
	I0103 20:13:09.192678   62015 config.go:182] Loaded profile config "no-preload-749210": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0103 20:13:09.192770   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHHostname
	I0103 20:13:09.195193   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:09.195594   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:87:c7", ip: ""} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:09.195633   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:09.195852   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHPort
	I0103 20:13:09.196100   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHKeyPath
	I0103 20:13:09.196272   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHKeyPath
	I0103 20:13:09.196437   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHUsername
	I0103 20:13:09.196637   62015 main.go:141] libmachine: Using SSH client type: native
	I0103 20:13:09.197028   62015 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.61.245 22 <nil> <nil>}
	I0103 20:13:09.197048   62015 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0103 20:13:09.528890   62015 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0103 20:13:09.528915   62015 machine.go:91] provisioned docker machine in 1.106221183s
	I0103 20:13:09.528924   62015 start.go:300] post-start starting for "no-preload-749210" (driver="kvm2")
	I0103 20:13:09.528949   62015 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0103 20:13:09.528966   62015 main.go:141] libmachine: (no-preload-749210) Calling .DriverName
	I0103 20:13:09.529337   62015 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0103 20:13:09.529372   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHHostname
	I0103 20:13:09.532679   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:09.533032   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:87:c7", ip: ""} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:09.533063   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:09.533262   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHPort
	I0103 20:13:09.533490   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHKeyPath
	I0103 20:13:09.533675   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHUsername
	I0103 20:13:09.533841   62015 sshutil.go:53] new ssh client: &{IP:192.168.61.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/no-preload-749210/id_rsa Username:docker}
	I0103 20:13:09.632949   62015 ssh_runner.go:195] Run: cat /etc/os-release
	I0103 20:13:09.638382   62015 info.go:137] Remote host: Buildroot 2021.02.12
	I0103 20:13:09.638421   62015 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-9609/.minikube/addons for local assets ...
	I0103 20:13:09.638502   62015 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-9609/.minikube/files for local assets ...
	I0103 20:13:09.638617   62015 filesync.go:149] local asset: /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem -> 167952.pem in /etc/ssl/certs
	I0103 20:13:09.638744   62015 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0103 20:13:09.650407   62015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem --> /etc/ssl/certs/167952.pem (1708 bytes)
	I0103 20:13:09.672528   62015 start.go:303] post-start completed in 143.577643ms
	I0103 20:13:09.672560   62015 fix.go:56] fixHost completed within 19.513324819s
	I0103 20:13:09.672585   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHHostname
	I0103 20:13:09.675037   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:09.675398   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:87:c7", ip: ""} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:09.675430   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:09.675587   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHPort
	I0103 20:13:09.675811   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHKeyPath
	I0103 20:13:09.675963   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHKeyPath
	I0103 20:13:09.676112   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHUsername
	I0103 20:13:09.676294   62015 main.go:141] libmachine: Using SSH client type: native
	I0103 20:13:09.676674   62015 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.61.245 22 <nil> <nil>}
	I0103 20:13:09.676690   62015 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0103 20:13:09.811720   62050 start.go:369] acquired machines lock for "default-k8s-diff-port-018788" in 4m4.205717121s
	I0103 20:13:09.811786   62050 start.go:96] Skipping create...Using existing machine configuration
	I0103 20:13:09.811797   62050 fix.go:54] fixHost starting: 
	I0103 20:13:09.812213   62050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:13:09.812257   62050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:13:09.831972   62050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36915
	I0103 20:13:09.832420   62050 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:13:09.832973   62050 main.go:141] libmachine: Using API Version  1
	I0103 20:13:09.833004   62050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:13:09.833345   62050 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:13:09.833505   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .DriverName
	I0103 20:13:09.833637   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetState
	I0103 20:13:09.835476   62050 fix.go:102] recreateIfNeeded on default-k8s-diff-port-018788: state=Stopped err=<nil>
	I0103 20:13:09.835520   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .DriverName
	W0103 20:13:09.835689   62050 fix.go:128] unexpected machine state, will restart: <nil>
	I0103 20:13:09.837499   62050 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-018788" ...
	I0103 20:13:09.838938   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .Start
	I0103 20:13:09.839117   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Ensuring networks are active...
	I0103 20:13:09.839888   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Ensuring network default is active
	I0103 20:13:09.840347   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Ensuring network mk-default-k8s-diff-port-018788 is active
	I0103 20:13:09.840765   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Getting domain xml...
	I0103 20:13:09.841599   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Creating domain...
	I0103 20:13:09.811571   62015 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704312789.764323206
	
	I0103 20:13:09.811601   62015 fix.go:206] guest clock: 1704312789.764323206
	I0103 20:13:09.811611   62015 fix.go:219] Guest: 2024-01-03 20:13:09.764323206 +0000 UTC Remote: 2024-01-03 20:13:09.672564299 +0000 UTC m=+244.986151230 (delta=91.758907ms)
	I0103 20:13:09.811636   62015 fix.go:190] guest clock delta is within tolerance: 91.758907ms
	I0103 20:13:09.811642   62015 start.go:83] releasing machines lock for "no-preload-749210", held for 19.652439302s
	I0103 20:13:09.811678   62015 main.go:141] libmachine: (no-preload-749210) Calling .DriverName
	I0103 20:13:09.811949   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetIP
	I0103 20:13:09.815012   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:09.815391   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:87:c7", ip: ""} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:09.815429   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:09.815641   62015 main.go:141] libmachine: (no-preload-749210) Calling .DriverName
	I0103 20:13:09.816177   62015 main.go:141] libmachine: (no-preload-749210) Calling .DriverName
	I0103 20:13:09.816363   62015 main.go:141] libmachine: (no-preload-749210) Calling .DriverName
	I0103 20:13:09.816471   62015 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0103 20:13:09.816509   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHHostname
	I0103 20:13:09.816620   62015 ssh_runner.go:195] Run: cat /version.json
	I0103 20:13:09.816646   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHHostname
	I0103 20:13:09.819652   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:09.819909   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:09.820058   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:87:c7", ip: ""} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:09.820088   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:09.820319   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:87:c7", ip: ""} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:09.820345   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:09.820377   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHPort
	I0103 20:13:09.820581   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHKeyPath
	I0103 20:13:09.820646   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHPort
	I0103 20:13:09.820753   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHUsername
	I0103 20:13:09.820822   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHKeyPath
	I0103 20:13:09.820910   62015 sshutil.go:53] new ssh client: &{IP:192.168.61.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/no-preload-749210/id_rsa Username:docker}
	I0103 20:13:09.821007   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHUsername
	I0103 20:13:09.821131   62015 sshutil.go:53] new ssh client: &{IP:192.168.61.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/no-preload-749210/id_rsa Username:docker}
	I0103 20:13:09.949119   62015 ssh_runner.go:195] Run: systemctl --version
	I0103 20:13:09.956247   62015 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0103 20:13:10.116715   62015 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0103 20:13:10.122512   62015 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0103 20:13:10.122640   62015 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0103 20:13:10.142239   62015 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0103 20:13:10.142265   62015 start.go:475] detecting cgroup driver to use...
	I0103 20:13:10.142336   62015 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0103 20:13:10.159473   62015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0103 20:13:10.175492   62015 docker.go:203] disabling cri-docker service (if available) ...
	I0103 20:13:10.175555   62015 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0103 20:13:10.191974   62015 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0103 20:13:10.208639   62015 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0103 20:13:10.343228   62015 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0103 20:13:10.457642   62015 docker.go:219] disabling docker service ...
	I0103 20:13:10.457720   62015 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0103 20:13:10.475117   62015 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0103 20:13:10.491265   62015 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0103 20:13:10.613064   62015 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0103 20:13:10.741969   62015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0103 20:13:10.755923   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0103 20:13:10.775483   62015 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0103 20:13:10.775550   62015 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:13:10.785489   62015 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0103 20:13:10.785557   62015 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:13:10.795303   62015 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:13:10.804763   62015 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:13:10.814559   62015 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0103 20:13:10.824431   62015 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0103 20:13:10.833193   62015 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0103 20:13:10.833273   62015 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0103 20:13:10.850446   62015 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0103 20:13:10.861775   62015 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0103 20:13:11.021577   62015 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0103 20:13:11.217675   62015 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0103 20:13:11.217748   62015 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0103 20:13:11.222475   62015 start.go:543] Will wait 60s for crictl version
	I0103 20:13:11.222552   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:13:11.226128   62015 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0103 20:13:11.266681   62015 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0103 20:13:11.266775   62015 ssh_runner.go:195] Run: crio --version
	I0103 20:13:11.313142   62015 ssh_runner.go:195] Run: crio --version
	I0103 20:13:11.358396   62015 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I0103 20:13:08.963472   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:13:09.462836   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:13:09.963771   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:13:09.991718   61676 api_server.go:72] duration metric: took 2.029094062s to wait for apiserver process to appear ...
	I0103 20:13:09.991748   61676 api_server.go:88] waiting for apiserver healthz status ...
	I0103 20:13:09.991769   61676 api_server.go:253] Checking apiserver healthz at https://192.168.50.197:8443/healthz ...
	I0103 20:13:09.992264   61676 api_server.go:269] stopped: https://192.168.50.197:8443/healthz: Get "https://192.168.50.197:8443/healthz": dial tcp 192.168.50.197:8443: connect: connection refused
	I0103 20:13:10.491803   61676 api_server.go:253] Checking apiserver healthz at https://192.168.50.197:8443/healthz ...
	I0103 20:13:11.359808   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetIP
	I0103 20:13:11.363074   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:11.363434   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:87:c7", ip: ""} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:11.363465   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:11.363695   62015 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0103 20:13:11.367689   62015 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0103 20:13:11.378693   62015 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0103 20:13:11.378746   62015 ssh_runner.go:195] Run: sudo crictl images --output json
	I0103 20:13:11.416544   62015 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0103 20:13:11.416570   62015 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0103 20:13:11.416642   62015 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 20:13:11.416698   62015 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0103 20:13:11.416724   62015 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0103 20:13:11.416699   62015 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0103 20:13:11.416929   62015 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0103 20:13:11.416671   62015 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0103 20:13:11.417054   62015 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0103 20:13:11.417093   62015 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0103 20:13:11.418600   62015 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0103 20:13:11.418621   62015 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0103 20:13:11.418630   62015 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0103 20:13:11.418646   62015 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0103 20:13:11.418661   62015 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 20:13:11.418675   62015 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0103 20:13:11.418685   62015 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0103 20:13:11.418697   62015 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0103 20:13:11.635223   62015 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0103 20:13:11.662007   62015 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0103 20:13:11.668522   62015 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0103 20:13:11.671471   62015 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0103 20:13:11.672069   62015 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0103 20:13:11.685216   62015 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0103 20:13:11.687462   62015 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0103 20:13:11.716775   62015 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0103 20:13:11.716825   62015 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0103 20:13:11.716882   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:13:11.762358   62015 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0103 20:13:11.762394   62015 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0103 20:13:11.762463   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:13:11.846225   62015 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0103 20:13:11.846268   62015 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0103 20:13:11.846317   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:13:11.846432   62015 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0103 20:13:11.846473   62015 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0103 20:13:11.846529   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:13:11.846515   62015 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0103 20:13:11.846655   62015 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0103 20:13:11.846711   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:13:11.956577   62015 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0103 20:13:11.956659   62015 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0103 20:13:11.956689   62015 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0103 20:13:11.956746   62015 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0103 20:13:11.956760   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:13:11.956782   62015 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0103 20:13:11.956820   62015 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0103 20:13:11.956873   62015 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0103 20:13:12.064715   62015 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0103 20:13:12.064764   62015 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0103 20:13:12.064720   62015 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0103 20:13:12.064856   62015 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0103 20:13:12.064903   62015 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0103 20:13:12.068647   62015 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0103 20:13:12.068685   62015 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0103 20:13:12.068752   62015 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0103 20:13:12.068767   62015 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0103 20:13:12.068771   62015 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0103 20:13:12.068841   62015 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0103 20:13:12.077600   62015 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0103 20:13:12.077622   62015 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0103 20:13:12.077682   62015 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0103 20:13:12.077798   62015 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0103 20:13:12.109729   62015 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0103 20:13:12.109778   62015 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0103 20:13:12.109838   62015 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0103 20:13:12.109927   62015 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0103 20:13:12.110020   62015 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I0103 20:13:12.237011   62015 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 20:13:14.279507   62015 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.201800359s)
	I0103 20:13:14.279592   62015 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0103 20:13:14.279606   62015 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0: (2.169553787s)
	I0103 20:13:14.279641   62015 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0103 20:13:14.279646   62015 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0103 20:13:14.279645   62015 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.042604307s)
	I0103 20:13:14.279725   62015 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0103 20:13:14.279726   62015 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0103 20:13:14.279760   62015 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 20:13:14.279802   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:13:14.285860   62015 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 20:13:11.246503   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting to get IP...
	I0103 20:13:11.247669   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:11.248203   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | unable to find current IP address of domain default-k8s-diff-port-018788 in network mk-default-k8s-diff-port-018788
	I0103 20:13:11.248301   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | I0103 20:13:11.248165   62835 retry.go:31] will retry after 292.358185ms: waiting for machine to come up
	I0103 20:13:11.541836   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:11.542224   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | unable to find current IP address of domain default-k8s-diff-port-018788 in network mk-default-k8s-diff-port-018788
	I0103 20:13:11.542257   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | I0103 20:13:11.542168   62835 retry.go:31] will retry after 370.634511ms: waiting for machine to come up
	I0103 20:13:11.914890   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:11.915372   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | unable to find current IP address of domain default-k8s-diff-port-018788 in network mk-default-k8s-diff-port-018788
	I0103 20:13:11.915403   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | I0103 20:13:11.915330   62835 retry.go:31] will retry after 304.80922ms: waiting for machine to come up
	I0103 20:13:12.221826   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:12.222257   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | unable to find current IP address of domain default-k8s-diff-port-018788 in network mk-default-k8s-diff-port-018788
	I0103 20:13:12.222289   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | I0103 20:13:12.222232   62835 retry.go:31] will retry after 534.177843ms: waiting for machine to come up
	I0103 20:13:12.757904   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:12.758389   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | unable to find current IP address of domain default-k8s-diff-port-018788 in network mk-default-k8s-diff-port-018788
	I0103 20:13:12.758422   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | I0103 20:13:12.758334   62835 retry.go:31] will retry after 749.166369ms: waiting for machine to come up
	I0103 20:13:13.509343   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:13.509938   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | unable to find current IP address of domain default-k8s-diff-port-018788 in network mk-default-k8s-diff-port-018788
	I0103 20:13:13.509984   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | I0103 20:13:13.509854   62835 retry.go:31] will retry after 716.215015ms: waiting for machine to come up
	I0103 20:13:14.227886   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:14.228388   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | unable to find current IP address of domain default-k8s-diff-port-018788 in network mk-default-k8s-diff-port-018788
	I0103 20:13:14.228414   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | I0103 20:13:14.228338   62835 retry.go:31] will retry after 1.095458606s: waiting for machine to come up
	I0103 20:13:15.324880   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:15.325299   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | unable to find current IP address of domain default-k8s-diff-port-018788 in network mk-default-k8s-diff-port-018788
	I0103 20:13:15.325332   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | I0103 20:13:15.325250   62835 retry.go:31] will retry after 1.266878415s: waiting for machine to come up
	I0103 20:13:14.427035   61676 api_server.go:279] https://192.168.50.197:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0103 20:13:14.427077   61676 api_server.go:103] status: https://192.168.50.197:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0103 20:13:14.427119   61676 api_server.go:253] Checking apiserver healthz at https://192.168.50.197:8443/healthz ...
	I0103 20:13:14.462068   61676 api_server.go:279] https://192.168.50.197:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0103 20:13:14.462115   61676 api_server.go:103] status: https://192.168.50.197:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0103 20:13:14.492283   61676 api_server.go:253] Checking apiserver healthz at https://192.168.50.197:8443/healthz ...
	I0103 20:13:14.500354   61676 api_server.go:279] https://192.168.50.197:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0103 20:13:14.500391   61676 api_server.go:103] status: https://192.168.50.197:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0103 20:13:14.991910   61676 api_server.go:253] Checking apiserver healthz at https://192.168.50.197:8443/healthz ...
	I0103 20:13:14.997522   61676 api_server.go:279] https://192.168.50.197:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0103 20:13:14.997550   61676 api_server.go:103] status: https://192.168.50.197:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0103 20:13:15.492157   61676 api_server.go:253] Checking apiserver healthz at https://192.168.50.197:8443/healthz ...
	I0103 20:13:15.500340   61676 api_server.go:279] https://192.168.50.197:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0103 20:13:15.500377   61676 api_server.go:103] status: https://192.168.50.197:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0103 20:13:15.992158   61676 api_server.go:253] Checking apiserver healthz at https://192.168.50.197:8443/healthz ...
	I0103 20:13:16.002940   61676 api_server.go:279] https://192.168.50.197:8443/healthz returned 200:
	ok
	I0103 20:13:16.020171   61676 api_server.go:141] control plane version: v1.28.4
	I0103 20:13:16.020205   61676 api_server.go:131] duration metric: took 6.028448633s to wait for apiserver health ...
	I0103 20:13:16.020216   61676 cni.go:84] Creating CNI manager for ""
	I0103 20:13:16.020226   61676 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0103 20:13:16.022596   61676 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0103 20:13:16.024514   61676 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0103 20:13:16.064582   61676 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0103 20:13:16.113727   61676 system_pods.go:43] waiting for kube-system pods to appear ...
	I0103 20:13:16.124984   61676 system_pods.go:59] 8 kube-system pods found
	I0103 20:13:16.125031   61676 system_pods.go:61] "coredns-5dd5756b68-sx6gg" [6a4ea161-1a32-4c3b-9a0d-b4c596492d8b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0103 20:13:16.125044   61676 system_pods.go:61] "etcd-embed-certs-451331" [01d6441d-5e39-405a-81df-c2ed1e28cf0b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0103 20:13:16.125061   61676 system_pods.go:61] "kube-apiserver-embed-certs-451331" [ed38f120-6a1a-48e7-9346-f792f2e13cfc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0103 20:13:16.125072   61676 system_pods.go:61] "kube-controller-manager-embed-certs-451331" [4ca17ea6-a7e6-425b-98ba-7f917ceb91a0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0103 20:13:16.125086   61676 system_pods.go:61] "kube-proxy-fsnb9" [d1f00cf1-e9c4-442b-a6b3-b633252b840c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0103 20:13:16.125097   61676 system_pods.go:61] "kube-scheduler-embed-certs-451331" [00ec8091-7ed7-40b0-8b63-1c548fa8632d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0103 20:13:16.125111   61676 system_pods.go:61] "metrics-server-57f55c9bc5-sm8rb" [12b9f83d-abf8-431c-a271-b8489d32f0de] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0103 20:13:16.125125   61676 system_pods.go:61] "storage-provisioner" [cbce49e7-cef5-40a1-a017-906fcc77ef66] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0103 20:13:16.125140   61676 system_pods.go:74] duration metric: took 11.390906ms to wait for pod list to return data ...
	I0103 20:13:16.125152   61676 node_conditions.go:102] verifying NodePressure condition ...
	I0103 20:13:16.133036   61676 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0103 20:13:16.133072   61676 node_conditions.go:123] node cpu capacity is 2
	I0103 20:13:16.133086   61676 node_conditions.go:105] duration metric: took 7.928329ms to run NodePressure ...
	I0103 20:13:16.133109   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:16.519151   61676 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0103 20:13:16.530359   61676 kubeadm.go:787] kubelet initialised
	I0103 20:13:16.530380   61676 kubeadm.go:788] duration metric: took 11.203465ms waiting for restarted kubelet to initialise ...
	I0103 20:13:16.530388   61676 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:13:16.540797   61676 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-sx6gg" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:16.550417   61676 pod_ready.go:97] node "embed-certs-451331" hosting pod "coredns-5dd5756b68-sx6gg" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-451331" has status "Ready":"False"
	I0103 20:13:16.550457   61676 pod_ready.go:81] duration metric: took 9.627239ms waiting for pod "coredns-5dd5756b68-sx6gg" in "kube-system" namespace to be "Ready" ...
	E0103 20:13:16.550475   61676 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-451331" hosting pod "coredns-5dd5756b68-sx6gg" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-451331" has status "Ready":"False"
	I0103 20:13:16.550486   61676 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-451331" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:16.557664   61676 pod_ready.go:97] node "embed-certs-451331" hosting pod "etcd-embed-certs-451331" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-451331" has status "Ready":"False"
	I0103 20:13:16.557693   61676 pod_ready.go:81] duration metric: took 7.191907ms waiting for pod "etcd-embed-certs-451331" in "kube-system" namespace to be "Ready" ...
	E0103 20:13:16.557705   61676 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-451331" hosting pod "etcd-embed-certs-451331" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-451331" has status "Ready":"False"
	I0103 20:13:16.557721   61676 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-451331" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:16.566973   61676 pod_ready.go:97] node "embed-certs-451331" hosting pod "kube-apiserver-embed-certs-451331" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-451331" has status "Ready":"False"
	I0103 20:13:16.567007   61676 pod_ready.go:81] duration metric: took 9.268451ms waiting for pod "kube-apiserver-embed-certs-451331" in "kube-system" namespace to be "Ready" ...
	E0103 20:13:16.567019   61676 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-451331" hosting pod "kube-apiserver-embed-certs-451331" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-451331" has status "Ready":"False"
	I0103 20:13:16.567027   61676 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-451331" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:16.587777   61676 pod_ready.go:97] node "embed-certs-451331" hosting pod "kube-controller-manager-embed-certs-451331" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-451331" has status "Ready":"False"
	I0103 20:13:16.587811   61676 pod_ready.go:81] duration metric: took 20.769874ms waiting for pod "kube-controller-manager-embed-certs-451331" in "kube-system" namespace to be "Ready" ...
	E0103 20:13:16.587825   61676 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-451331" hosting pod "kube-controller-manager-embed-certs-451331" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-451331" has status "Ready":"False"
	I0103 20:13:16.587832   61676 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-fsnb9" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:16.923613   61676 pod_ready.go:97] node "embed-certs-451331" hosting pod "kube-proxy-fsnb9" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-451331" has status "Ready":"False"
	I0103 20:13:16.923643   61676 pod_ready.go:81] duration metric: took 335.80096ms waiting for pod "kube-proxy-fsnb9" in "kube-system" namespace to be "Ready" ...
	E0103 20:13:16.923655   61676 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-451331" hosting pod "kube-proxy-fsnb9" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-451331" has status "Ready":"False"
	I0103 20:13:16.923663   61676 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-451331" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:17.323875   61676 pod_ready.go:97] node "embed-certs-451331" hosting pod "kube-scheduler-embed-certs-451331" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-451331" has status "Ready":"False"
	I0103 20:13:17.323911   61676 pod_ready.go:81] duration metric: took 400.238515ms waiting for pod "kube-scheduler-embed-certs-451331" in "kube-system" namespace to be "Ready" ...
	E0103 20:13:17.323922   61676 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-451331" hosting pod "kube-scheduler-embed-certs-451331" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-451331" has status "Ready":"False"
	I0103 20:13:17.323931   61676 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:17.724694   61676 pod_ready.go:97] node "embed-certs-451331" hosting pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-451331" has status "Ready":"False"
	I0103 20:13:17.724727   61676 pod_ready.go:81] duration metric: took 400.785148ms waiting for pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace to be "Ready" ...
	E0103 20:13:17.724741   61676 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-451331" hosting pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-451331" has status "Ready":"False"
	I0103 20:13:17.724750   61676 pod_ready.go:38] duration metric: took 1.194352759s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:13:17.724774   61676 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0103 20:13:17.754724   61676 ops.go:34] apiserver oom_adj: -16
	I0103 20:13:17.754762   61676 kubeadm.go:640] restartCluster took 20.835238159s
	I0103 20:13:17.754774   61676 kubeadm.go:406] StartCluster complete in 20.886921594s
	I0103 20:13:17.754794   61676 settings.go:142] acquiring lock: {Name:mkd213c48538fa01cb82b417485055a8adbf5e4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:13:17.754875   61676 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17885-9609/kubeconfig
	I0103 20:13:17.757638   61676 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-9609/kubeconfig: {Name:mkbd4e6a8b39f5a4a43fb71671a7bbd8b1617cf1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:13:17.759852   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0103 20:13:17.759948   61676 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0103 20:13:17.760022   61676 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-451331"
	I0103 20:13:17.760049   61676 addons.go:237] Setting addon storage-provisioner=true in "embed-certs-451331"
	W0103 20:13:17.760060   61676 addons.go:246] addon storage-provisioner should already be in state true
	I0103 20:13:17.760105   61676 host.go:66] Checking if "embed-certs-451331" exists ...
	I0103 20:13:17.760154   61676 config.go:182] Loaded profile config "embed-certs-451331": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 20:13:17.760202   61676 addons.go:69] Setting default-storageclass=true in profile "embed-certs-451331"
	I0103 20:13:17.760227   61676 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-451331"
	I0103 20:13:17.760525   61676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:13:17.760553   61676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:13:17.760595   61676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:13:17.760619   61676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:13:17.760814   61676 addons.go:69] Setting metrics-server=true in profile "embed-certs-451331"
	I0103 20:13:17.760869   61676 addons.go:237] Setting addon metrics-server=true in "embed-certs-451331"
	W0103 20:13:17.760887   61676 addons.go:246] addon metrics-server should already be in state true
	I0103 20:13:17.760949   61676 host.go:66] Checking if "embed-certs-451331" exists ...
	I0103 20:13:17.761311   61676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:13:17.761367   61676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:13:17.778350   61676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36365
	I0103 20:13:17.778603   61676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40503
	I0103 20:13:17.778840   61676 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:13:17.778947   61676 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:13:17.779349   61676 main.go:141] libmachine: Using API Version  1
	I0103 20:13:17.779369   61676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:13:17.779496   61676 main.go:141] libmachine: Using API Version  1
	I0103 20:13:17.779506   61676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:13:17.779894   61676 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:13:17.779936   61676 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:13:17.780390   61676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46541
	I0103 20:13:17.780507   61676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:13:17.780528   61676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:13:17.780892   61676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:13:17.780933   61676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:13:17.781532   61676 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:13:17.782012   61676 main.go:141] libmachine: Using API Version  1
	I0103 20:13:17.782030   61676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:13:17.782393   61676 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:13:17.782580   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetState
	I0103 20:13:17.786209   61676 addons.go:237] Setting addon default-storageclass=true in "embed-certs-451331"
	W0103 20:13:17.786231   61676 addons.go:246] addon default-storageclass should already be in state true
	I0103 20:13:17.786264   61676 host.go:66] Checking if "embed-certs-451331" exists ...
	I0103 20:13:17.786730   61676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:13:17.786761   61676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:13:17.796538   61676 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-451331" context rescaled to 1 replicas
	I0103 20:13:17.796579   61676 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.197 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0103 20:13:17.798616   61676 out.go:177] * Verifying Kubernetes components...
	I0103 20:13:17.800702   61676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 20:13:17.799744   61676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37933
	I0103 20:13:17.801004   61676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37585
	I0103 20:13:17.801125   61676 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:13:17.801622   61676 main.go:141] libmachine: Using API Version  1
	I0103 20:13:17.801643   61676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:13:17.801967   61676 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:13:17.802456   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetState
	I0103 20:13:17.804195   61676 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:13:17.804537   61676 main.go:141] libmachine: (embed-certs-451331) Calling .DriverName
	I0103 20:13:17.804683   61676 main.go:141] libmachine: Using API Version  1
	I0103 20:13:17.804700   61676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:13:17.806577   61676 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 20:13:17.805108   61676 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:13:17.807681   61676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42317
	I0103 20:13:17.808340   61676 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0103 20:13:17.808354   61676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0103 20:13:17.808371   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHHostname
	I0103 20:13:17.808513   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetState
	I0103 20:13:17.809005   61676 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:13:17.809510   61676 main.go:141] libmachine: Using API Version  1
	I0103 20:13:17.809529   61676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:13:17.809978   61676 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:13:17.810778   61676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:13:17.810822   61676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:13:17.812250   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:13:17.812607   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:4a:19", ip: ""} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:13:17.812629   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:13:17.812892   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHPort
	I0103 20:13:17.812970   61676 main.go:141] libmachine: (embed-certs-451331) Calling .DriverName
	I0103 20:13:17.813069   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHKeyPath
	I0103 20:13:17.815321   61676 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0103 20:13:17.813342   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHUsername
	I0103 20:13:17.817289   61676 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0103 20:13:17.817308   61676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0103 20:13:17.817336   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHHostname
	I0103 20:13:17.817473   61676 sshutil.go:53] new ssh client: &{IP:192.168.50.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/embed-certs-451331/id_rsa Username:docker}
	I0103 20:13:17.820418   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:13:17.820892   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:4a:19", ip: ""} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:13:17.820920   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:13:17.821168   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHPort
	I0103 20:13:17.821350   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHKeyPath
	I0103 20:13:17.821468   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHUsername
	I0103 20:13:17.821597   61676 sshutil.go:53] new ssh client: &{IP:192.168.50.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/embed-certs-451331/id_rsa Username:docker}
	I0103 20:13:17.829857   61676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34553
	I0103 20:13:17.830343   61676 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:13:17.830847   61676 main.go:141] libmachine: Using API Version  1
	I0103 20:13:17.830869   61676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:13:17.831278   61676 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:13:17.831432   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetState
	I0103 20:13:17.833351   61676 main.go:141] libmachine: (embed-certs-451331) Calling .DriverName
	I0103 20:13:17.833678   61676 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0103 20:13:17.833695   61676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0103 20:13:17.833714   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHHostname
	I0103 20:13:17.837454   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:13:17.837708   61676 main.go:141] libmachine: (embed-certs-451331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:4a:19", ip: ""} in network mk-embed-certs-451331: {Iface:virbr4 ExpiryTime:2024-01-03 21:12:41 +0000 UTC Type:0 Mac:52:54:00:38:4a:19 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:embed-certs-451331 Clientid:01:52:54:00:38:4a:19}
	I0103 20:13:17.837730   61676 main.go:141] libmachine: (embed-certs-451331) DBG | domain embed-certs-451331 has defined IP address 192.168.50.197 and MAC address 52:54:00:38:4a:19 in network mk-embed-certs-451331
	I0103 20:13:17.837975   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHPort
	I0103 20:13:17.838211   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHKeyPath
	I0103 20:13:17.838384   61676 main.go:141] libmachine: (embed-certs-451331) Calling .GetSSHUsername
	I0103 20:13:17.838534   61676 sshutil.go:53] new ssh client: &{IP:192.168.50.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/embed-certs-451331/id_rsa Username:docker}
	I0103 20:13:18.036885   61676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0103 20:13:18.097340   61676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0103 20:13:18.099953   61676 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0103 20:13:18.099982   61676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0103 20:13:18.242823   61676 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0103 20:13:18.242847   61676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0103 20:13:18.309930   61676 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0103 20:13:18.309959   61676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0103 20:13:18.321992   61676 node_ready.go:35] waiting up to 6m0s for node "embed-certs-451331" to be "Ready" ...
	I0103 20:13:18.322077   61676 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0103 20:13:18.366727   61676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0103 20:13:16.441666   62015 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.161911946s)
	I0103 20:13:16.441698   62015 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0103 20:13:16.441720   62015 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0103 20:13:16.441740   62015 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.155838517s)
	I0103 20:13:16.441767   62015 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0103 20:13:16.441855   62015 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0103 20:13:16.441964   62015 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0103 20:13:20.073248   61676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.975867864s)
	I0103 20:13:20.073318   61676 main.go:141] libmachine: Making call to close driver server
	I0103 20:13:20.073383   61676 main.go:141] libmachine: (embed-certs-451331) Calling .Close
	I0103 20:13:20.073265   61676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.03634078s)
	I0103 20:13:20.073419   61676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.706641739s)
	I0103 20:13:20.073466   61676 main.go:141] libmachine: Making call to close driver server
	I0103 20:13:20.073490   61676 main.go:141] libmachine: (embed-certs-451331) Calling .Close
	I0103 20:13:20.073489   61676 main.go:141] libmachine: Making call to close driver server
	I0103 20:13:20.073553   61676 main.go:141] libmachine: (embed-certs-451331) Calling .Close
	I0103 20:13:20.073744   61676 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:13:20.073759   61676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:13:20.073775   61676 main.go:141] libmachine: Making call to close driver server
	I0103 20:13:20.073786   61676 main.go:141] libmachine: (embed-certs-451331) Calling .Close
	I0103 20:13:20.073878   61676 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:13:20.073905   61676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:13:20.073935   61676 main.go:141] libmachine: (embed-certs-451331) DBG | Closing plugin on server side
	I0103 20:13:20.073938   61676 main.go:141] libmachine: Making call to close driver server
	I0103 20:13:20.073980   61676 main.go:141] libmachine: (embed-certs-451331) DBG | Closing plugin on server side
	I0103 20:13:20.073992   61676 main.go:141] libmachine: (embed-certs-451331) Calling .Close
	I0103 20:13:20.074016   61676 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:13:20.074036   61676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:13:20.074073   61676 main.go:141] libmachine: Making call to close driver server
	I0103 20:13:20.074086   61676 main.go:141] libmachine: (embed-certs-451331) Calling .Close
	I0103 20:13:20.074309   61676 main.go:141] libmachine: (embed-certs-451331) DBG | Closing plugin on server side
	I0103 20:13:20.074369   61676 main.go:141] libmachine: (embed-certs-451331) DBG | Closing plugin on server side
	I0103 20:13:20.074428   61676 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:13:20.074476   61676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:13:20.074454   61676 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:13:20.074506   61676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:13:20.074558   61676 addons.go:473] Verifying addon metrics-server=true in "embed-certs-451331"
	I0103 20:13:20.077560   61676 main.go:141] libmachine: (embed-certs-451331) DBG | Closing plugin on server side
	I0103 20:13:20.077613   61676 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:13:20.077653   61676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:13:20.088401   61676 main.go:141] libmachine: Making call to close driver server
	I0103 20:13:20.088441   61676 main.go:141] libmachine: (embed-certs-451331) Calling .Close
	I0103 20:13:20.088845   61676 main.go:141] libmachine: (embed-certs-451331) DBG | Closing plugin on server side
	I0103 20:13:20.090413   61676 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:13:20.090439   61676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:13:20.092641   61676 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I0103 20:13:16.593786   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:16.594320   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | unable to find current IP address of domain default-k8s-diff-port-018788 in network mk-default-k8s-diff-port-018788
	I0103 20:13:16.594352   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | I0103 20:13:16.594229   62835 retry.go:31] will retry after 1.232411416s: waiting for machine to come up
	I0103 20:13:17.828286   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:17.832049   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | unable to find current IP address of domain default-k8s-diff-port-018788 in network mk-default-k8s-diff-port-018788
	I0103 20:13:17.832078   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | I0103 20:13:17.828787   62835 retry.go:31] will retry after 2.020753248s: waiting for machine to come up
	I0103 20:13:19.851119   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:19.851645   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | unable to find current IP address of domain default-k8s-diff-port-018788 in network mk-default-k8s-diff-port-018788
	I0103 20:13:19.851683   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | I0103 20:13:19.851595   62835 retry.go:31] will retry after 2.720330873s: waiting for machine to come up
	I0103 20:13:20.094375   61676 addons.go:508] enable addons completed in 2.334425533s: enabled=[storage-provisioner metrics-server default-storageclass]
	I0103 20:13:20.325950   61676 node_ready.go:58] node "embed-certs-451331" has status "Ready":"False"
	I0103 20:13:22.327709   61676 node_ready.go:58] node "embed-certs-451331" has status "Ready":"False"
	I0103 20:13:19.820972   62015 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (3.379182556s)
	I0103 20:13:19.821009   62015 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0103 20:13:19.821032   62015 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0103 20:13:19.820976   62015 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (3.378974193s)
	I0103 20:13:19.821081   62015 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0103 20:13:19.821092   62015 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0103 20:13:21.294764   62015 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.47365805s)
	I0103 20:13:21.294796   62015 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0103 20:13:21.294826   62015 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0103 20:13:21.294879   62015 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0103 20:13:24.067996   62015 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.773083678s)
	I0103 20:13:24.068031   62015 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0103 20:13:24.068071   62015 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0103 20:13:24.068131   62015 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0103 20:13:22.573532   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:22.573959   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | unable to find current IP address of domain default-k8s-diff-port-018788 in network mk-default-k8s-diff-port-018788
	I0103 20:13:22.573984   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | I0103 20:13:22.573882   62835 retry.go:31] will retry after 2.869192362s: waiting for machine to come up
	I0103 20:13:25.444272   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:25.444774   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | unable to find current IP address of domain default-k8s-diff-port-018788 in network mk-default-k8s-diff-port-018788
	I0103 20:13:25.444801   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | I0103 20:13:25.444710   62835 retry.go:31] will retry after 3.61848561s: waiting for machine to come up
	I0103 20:13:24.327795   61676 node_ready.go:58] node "embed-certs-451331" has status "Ready":"False"
	I0103 20:13:24.831015   61676 node_ready.go:49] node "embed-certs-451331" has status "Ready":"True"
	I0103 20:13:24.831037   61676 node_ready.go:38] duration metric: took 6.509012992s waiting for node "embed-certs-451331" to be "Ready" ...
	I0103 20:13:24.831046   61676 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:13:24.838244   61676 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-sx6gg" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:25.345945   61676 pod_ready.go:92] pod "coredns-5dd5756b68-sx6gg" in "kube-system" namespace has status "Ready":"True"
	I0103 20:13:25.345980   61676 pod_ready.go:81] duration metric: took 507.709108ms waiting for pod "coredns-5dd5756b68-sx6gg" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:25.345991   61676 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-451331" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:25.352763   61676 pod_ready.go:92] pod "etcd-embed-certs-451331" in "kube-system" namespace has status "Ready":"True"
	I0103 20:13:25.352798   61676 pod_ready.go:81] duration metric: took 6.794419ms waiting for pod "etcd-embed-certs-451331" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:25.352812   61676 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-451331" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:25.359491   61676 pod_ready.go:92] pod "kube-apiserver-embed-certs-451331" in "kube-system" namespace has status "Ready":"True"
	I0103 20:13:25.359533   61676 pod_ready.go:81] duration metric: took 6.711829ms waiting for pod "kube-apiserver-embed-certs-451331" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:25.359547   61676 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-451331" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:25.867866   61676 pod_ready.go:92] pod "kube-controller-manager-embed-certs-451331" in "kube-system" namespace has status "Ready":"True"
	I0103 20:13:25.867898   61676 pod_ready.go:81] duration metric: took 508.341809ms waiting for pod "kube-controller-manager-embed-certs-451331" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:25.867912   61676 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fsnb9" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:26.026106   61676 pod_ready.go:92] pod "kube-proxy-fsnb9" in "kube-system" namespace has status "Ready":"True"
	I0103 20:13:26.026140   61676 pod_ready.go:81] duration metric: took 158.216243ms waiting for pod "kube-proxy-fsnb9" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:26.026153   61676 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-451331" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:26.428480   61676 pod_ready.go:92] pod "kube-scheduler-embed-certs-451331" in "kube-system" namespace has status "Ready":"True"
	I0103 20:13:26.428506   61676 pod_ready.go:81] duration metric: took 402.345241ms waiting for pod "kube-scheduler-embed-certs-451331" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:26.428525   61676 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:28.438138   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
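
Note: the node_ready/pod_ready waits above simply poll the API server until the node and each system-critical pod report a Ready condition. A minimal, hypothetical client-go sketch of that style of readiness poll (not taken from the test code; kubeconfigPath, nodeName and the 2-second interval are assumptions) could look like:

// readiness_sketch.go - illustrative only; not part of the minikube test suite.
// Polls a node's Ready condition, mirroring the node_ready wait logged above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForNodeReady(kubeconfigPath, nodeName string, timeout time.Duration) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
	if err != nil {
		return err
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.Background(), nodeName, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil // node reports Ready
				}
			}
		}
		time.Sleep(2 * time.Second) // re-check on a short interval
	}
	return fmt.Errorf("node %q not Ready within %s", nodeName, timeout)
}

func main() {
	// Placeholder arguments; the profile name is taken from the log above.
	if err := waitForNodeReady("/var/lib/minikube/kubeconfig", "embed-certs-451331", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}

The log shows the same pattern: repeated "Ready":"False" checks roughly every two seconds until the node flips to "Ready":"True", after which each system pod is waited on in turn.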
	I0103 20:13:27.768745   62015 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (3.700590535s)
	I0103 20:13:27.768774   62015 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0103 20:13:27.768797   62015 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0103 20:13:27.768833   62015 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0103 20:13:28.718165   62015 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0103 20:13:28.718231   62015 cache_images.go:123] Successfully loaded all cached images
	I0103 20:13:28.718239   62015 cache_images.go:92] LoadImages completed in 17.301651166s
	I0103 20:13:28.718342   62015 ssh_runner.go:195] Run: crio config
	I0103 20:13:28.770786   62015 cni.go:84] Creating CNI manager for ""
	I0103 20:13:28.770813   62015 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0103 20:13:28.770838   62015 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0103 20:13:28.770862   62015 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.245 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-749210 NodeName:no-preload-749210 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.245"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.245 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0103 20:13:28.771031   62015 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.245
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-749210"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.245
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.245"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
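
Note: the kubeadm config printed above is a single multi-document YAML bundling InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration, written out as /var/tmp/minikube/kubeadm.yaml.new. A minimal, hypothetical stdlib-only Go sketch that splits such a bundle on its --- separators and lists each document's kind (the file path is taken from the log; everything else is illustrative):

// kinds_sketch.go - illustrative only; reports the "kind:" of each document in
// a multi-document kubeadm YAML such as /var/tmp/minikube/kubeadm.yaml.new.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new") // path assumed from the log above
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Split on the document separator lines and print the kind of each document.
	for i, doc := range strings.Split(string(data), "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(strings.TrimSpace(line), "kind:") {
				fmt.Printf("document %d: %s\n", i+1, strings.TrimSpace(line))
				break
			}
		}
	}
}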
	
	I0103 20:13:28.771103   62015 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-749210 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.245
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-749210 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0103 20:13:28.771163   62015 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0103 20:13:28.780756   62015 binaries.go:44] Found k8s binaries, skipping transfer
	I0103 20:13:28.780834   62015 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0103 20:13:28.789160   62015 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I0103 20:13:28.804638   62015 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0103 20:13:28.820113   62015 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I0103 20:13:28.835707   62015 ssh_runner.go:195] Run: grep 192.168.61.245	control-plane.minikube.internal$ /etc/hosts
	I0103 20:13:28.839456   62015 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.245	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0103 20:13:28.850530   62015 certs.go:56] Setting up /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/no-preload-749210 for IP: 192.168.61.245
	I0103 20:13:28.850581   62015 certs.go:190] acquiring lock for shared ca certs: {Name:mkcbd6a6a2f3ee7625ecf4a1f72bb7f9689bd33d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:13:28.850730   62015 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17885-9609/.minikube/ca.key
	I0103 20:13:28.850770   62015 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.key
	I0103 20:13:28.850833   62015 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/no-preload-749210/client.key
	I0103 20:13:28.850886   62015 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/no-preload-749210/apiserver.key.5dd805e0
	I0103 20:13:28.850922   62015 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/no-preload-749210/proxy-client.key
	I0103 20:13:28.851054   62015 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795.pem (1338 bytes)
	W0103 20:13:28.851081   62015 certs.go:433] ignoring /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795_empty.pem, impossibly tiny 0 bytes
	I0103 20:13:28.851093   62015 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem (1675 bytes)
	I0103 20:13:28.851117   62015 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem (1078 bytes)
	I0103 20:13:28.851139   62015 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem (1123 bytes)
	I0103 20:13:28.851168   62015 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem (1679 bytes)
	I0103 20:13:28.851210   62015 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem (1708 bytes)
	I0103 20:13:28.851832   62015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/no-preload-749210/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0103 20:13:28.874236   62015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/no-preload-749210/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0103 20:13:28.896624   62015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/no-preload-749210/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0103 20:13:28.919016   62015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/no-preload-749210/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0103 20:13:28.941159   62015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0103 20:13:28.963311   62015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0103 20:13:28.985568   62015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0103 20:13:29.007709   62015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0103 20:13:29.030188   62015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0103 20:13:29.052316   62015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795.pem --> /usr/share/ca-certificates/16795.pem (1338 bytes)
	I0103 20:13:29.076761   62015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem --> /usr/share/ca-certificates/167952.pem (1708 bytes)
	I0103 20:13:29.101462   62015 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0103 20:13:29.118605   62015 ssh_runner.go:195] Run: openssl version
	I0103 20:13:29.124144   62015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0103 20:13:29.133148   62015 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:13:29.137750   62015 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  3 18:58 /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:13:29.137809   62015 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:13:29.143321   62015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0103 20:13:29.152302   62015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16795.pem && ln -fs /usr/share/ca-certificates/16795.pem /etc/ssl/certs/16795.pem"
	I0103 20:13:29.161551   62015 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16795.pem
	I0103 20:13:29.166396   62015 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  3 19:07 /usr/share/ca-certificates/16795.pem
	I0103 20:13:29.166457   62015 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16795.pem
	I0103 20:13:29.173179   62015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16795.pem /etc/ssl/certs/51391683.0"
	I0103 20:13:29.184167   62015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167952.pem && ln -fs /usr/share/ca-certificates/167952.pem /etc/ssl/certs/167952.pem"
	I0103 20:13:29.194158   62015 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167952.pem
	I0103 20:13:29.198763   62015 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  3 19:07 /usr/share/ca-certificates/167952.pem
	I0103 20:13:29.198836   62015 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167952.pem
	I0103 20:13:29.204516   62015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167952.pem /etc/ssl/certs/3ec20f2e.0"
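
Note: the openssl x509 -hash / ln -fs pairs above install each CA under /etc/ssl/certs using its OpenSSL subject-hash as the link name (for example, minikubeCA.pem becomes b5213941.0). A minimal, hypothetical Go sketch that derives that link name the same way by shelling out to openssl (the certificate path is a placeholder; collision handling beyond the .0 suffix is ignored):

// certhash_sketch.go - illustrative only; derives the /etc/ssl/certs symlink
// name (<subject-hash>.0) for a CA certificate by invoking openssl, mirroring
// the "openssl x509 -hash -noout" + "ln -fs" sequence in the log above.
package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
	"strings"
)

func symlinkNameFor(certPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem, per the log
	return filepath.Join("/etc/ssl/certs", hash+".0"), nil
}

func main() {
	link, err := symlinkNameFor("/usr/share/ca-certificates/minikubeCA.pem") // placeholder path
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("would symlink to:", link)
}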
	I0103 20:13:29.214529   62015 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0103 20:13:29.218834   62015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0103 20:13:29.225036   62015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0103 20:13:29.231166   62015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0103 20:13:29.237200   62015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0103 20:13:29.243158   62015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0103 20:13:29.249694   62015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
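
Note: the openssl x509 -noout -checkend 86400 runs above confirm that each control-plane certificate remains valid for at least the next 24 hours. A minimal, hypothetical stdlib Go equivalent (the certificate path is a placeholder):

// checkend_sketch.go - illustrative only; reports whether a PEM certificate
// expires within the next 24h, analogous to "openssl x509 -noout -checkend 86400".
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(certPath string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(certPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", certPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	// Placeholder path; the log checks the apiserver, etcd and front-proxy client certs.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}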
	I0103 20:13:29.255582   62015 kubeadm.go:404] StartCluster: {Name:no-preload-749210 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-749210 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.245 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 20:13:29.255672   62015 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0103 20:13:29.255758   62015 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0103 20:13:29.299249   62015 cri.go:89] found id: ""
	I0103 20:13:29.299346   62015 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0103 20:13:29.311210   62015 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0103 20:13:29.311227   62015 kubeadm.go:636] restartCluster start
	I0103 20:13:29.311271   62015 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0103 20:13:29.320430   62015 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:29.321471   62015 kubeconfig.go:92] found "no-preload-749210" server: "https://192.168.61.245:8443"
	I0103 20:13:29.324643   62015 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0103 20:13:29.333237   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:29.333300   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:29.345156   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:30.219284   61400 start.go:369] acquired machines lock for "old-k8s-version-927922" in 54.622555379s
	I0103 20:13:30.219352   61400 start.go:96] Skipping create...Using existing machine configuration
	I0103 20:13:30.219364   61400 fix.go:54] fixHost starting: 
	I0103 20:13:30.219739   61400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:13:30.219770   61400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:13:30.235529   61400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41183
	I0103 20:13:30.235926   61400 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:13:30.236537   61400 main.go:141] libmachine: Using API Version  1
	I0103 20:13:30.236562   61400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:13:30.236911   61400 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:13:30.237121   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .DriverName
	I0103 20:13:30.237293   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetState
	I0103 20:13:30.238979   61400 fix.go:102] recreateIfNeeded on old-k8s-version-927922: state=Stopped err=<nil>
	I0103 20:13:30.239006   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .DriverName
	W0103 20:13:30.239155   61400 fix.go:128] unexpected machine state, will restart: <nil>
	I0103 20:13:30.241210   61400 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-927922" ...
	I0103 20:13:29.067586   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.068030   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Found IP for machine: 192.168.39.139
	I0103 20:13:29.068048   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Reserving static IP address...
	I0103 20:13:29.068090   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has current primary IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.068505   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-018788", mac: "52:54:00:df:c8:9f", ip: "192.168.39.139"} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:13:29.068532   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | skip adding static IP to network mk-default-k8s-diff-port-018788 - found existing host DHCP lease matching {name: "default-k8s-diff-port-018788", mac: "52:54:00:df:c8:9f", ip: "192.168.39.139"}
	I0103 20:13:29.068549   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Reserved static IP address: 192.168.39.139
	I0103 20:13:29.068571   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Waiting for SSH to be available...
	I0103 20:13:29.068608   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | Getting to WaitForSSH function...
	I0103 20:13:29.071139   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.071587   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c8:9f", ip: ""} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:13:29.071620   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.071779   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | Using SSH client type: external
	I0103 20:13:29.071810   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | Using SSH private key: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/default-k8s-diff-port-018788/id_rsa (-rw-------)
	I0103 20:13:29.071858   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.139 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17885-9609/.minikube/machines/default-k8s-diff-port-018788/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0103 20:13:29.071879   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | About to run SSH command:
	I0103 20:13:29.071896   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | exit 0
	I0103 20:13:29.166962   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | SSH cmd err, output: <nil>: 
	I0103 20:13:29.167365   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetConfigRaw
	I0103 20:13:29.167989   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetIP
	I0103 20:13:29.170671   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.171052   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c8:9f", ip: ""} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:13:29.171092   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.171325   62050 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/default-k8s-diff-port-018788/config.json ...
	I0103 20:13:29.171564   62050 machine.go:88] provisioning docker machine ...
	I0103 20:13:29.171589   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .DriverName
	I0103 20:13:29.171866   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetMachineName
	I0103 20:13:29.172058   62050 buildroot.go:166] provisioning hostname "default-k8s-diff-port-018788"
	I0103 20:13:29.172084   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetMachineName
	I0103 20:13:29.172253   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHHostname
	I0103 20:13:29.175261   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.175626   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c8:9f", ip: ""} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:13:29.175660   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.175749   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHPort
	I0103 20:13:29.175943   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHKeyPath
	I0103 20:13:29.176219   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHKeyPath
	I0103 20:13:29.176392   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHUsername
	I0103 20:13:29.176611   62050 main.go:141] libmachine: Using SSH client type: native
	I0103 20:13:29.177083   62050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.39.139 22 <nil> <nil>}
	I0103 20:13:29.177105   62050 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-018788 && echo "default-k8s-diff-port-018788" | sudo tee /etc/hostname
	I0103 20:13:29.304876   62050 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-018788
	
	I0103 20:13:29.304915   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHHostname
	I0103 20:13:29.307645   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.308124   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c8:9f", ip: ""} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:13:29.308190   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.308389   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHPort
	I0103 20:13:29.308619   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHKeyPath
	I0103 20:13:29.308799   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHKeyPath
	I0103 20:13:29.308997   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHUsername
	I0103 20:13:29.309177   62050 main.go:141] libmachine: Using SSH client type: native
	I0103 20:13:29.309652   62050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.39.139 22 <nil> <nil>}
	I0103 20:13:29.309682   62050 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-018788' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-018788/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-018788' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0103 20:13:29.431479   62050 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0103 20:13:29.431517   62050 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17885-9609/.minikube CaCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17885-9609/.minikube}
	I0103 20:13:29.431555   62050 buildroot.go:174] setting up certificates
	I0103 20:13:29.431569   62050 provision.go:83] configureAuth start
	I0103 20:13:29.431582   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetMachineName
	I0103 20:13:29.431900   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetIP
	I0103 20:13:29.435012   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.435482   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c8:9f", ip: ""} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:13:29.435517   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.435638   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHHostname
	I0103 20:13:29.437865   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.438267   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c8:9f", ip: ""} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:13:29.438303   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.438388   62050 provision.go:138] copyHostCerts
	I0103 20:13:29.438448   62050 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem, removing ...
	I0103 20:13:29.438461   62050 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem
	I0103 20:13:29.438527   62050 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem (1078 bytes)
	I0103 20:13:29.438625   62050 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem, removing ...
	I0103 20:13:29.438633   62050 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem
	I0103 20:13:29.438653   62050 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem (1123 bytes)
	I0103 20:13:29.438713   62050 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem, removing ...
	I0103 20:13:29.438720   62050 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem
	I0103 20:13:29.438738   62050 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem (1679 bytes)
	I0103 20:13:29.438787   62050 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-018788 san=[192.168.39.139 192.168.39.139 localhost 127.0.0.1 minikube default-k8s-diff-port-018788]
	I0103 20:13:29.494476   62050 provision.go:172] copyRemoteCerts
	I0103 20:13:29.494562   62050 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0103 20:13:29.494590   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHHostname
	I0103 20:13:29.497330   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.497597   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c8:9f", ip: ""} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:13:29.497623   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.497786   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHPort
	I0103 20:13:29.497956   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHKeyPath
	I0103 20:13:29.498139   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHUsername
	I0103 20:13:29.498268   62050 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/default-k8s-diff-port-018788/id_rsa Username:docker}
	I0103 20:13:29.583531   62050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0103 20:13:29.605944   62050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0103 20:13:29.630747   62050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0103 20:13:29.656325   62050 provision.go:86] duration metric: configureAuth took 224.741883ms
	I0103 20:13:29.656355   62050 buildroot.go:189] setting minikube options for container-runtime
	I0103 20:13:29.656525   62050 config.go:182] Loaded profile config "default-k8s-diff-port-018788": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 20:13:29.656619   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHHostname
	I0103 20:13:29.659656   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.660182   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c8:9f", ip: ""} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:13:29.660213   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.660434   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHPort
	I0103 20:13:29.660643   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHKeyPath
	I0103 20:13:29.660864   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHKeyPath
	I0103 20:13:29.661019   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHUsername
	I0103 20:13:29.661217   62050 main.go:141] libmachine: Using SSH client type: native
	I0103 20:13:29.661571   62050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.39.139 22 <nil> <nil>}
	I0103 20:13:29.661588   62050 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0103 20:13:29.970938   62050 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0103 20:13:29.970966   62050 machine.go:91] provisioned docker machine in 799.385733ms
	I0103 20:13:29.970975   62050 start.go:300] post-start starting for "default-k8s-diff-port-018788" (driver="kvm2")
	I0103 20:13:29.970985   62050 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0103 20:13:29.971007   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .DriverName
	I0103 20:13:29.971387   62050 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0103 20:13:29.971418   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHHostname
	I0103 20:13:29.974114   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.974487   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c8:9f", ip: ""} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:13:29.974562   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:29.974706   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHPort
	I0103 20:13:29.974894   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHKeyPath
	I0103 20:13:29.975075   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHUsername
	I0103 20:13:29.975227   62050 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/default-k8s-diff-port-018788/id_rsa Username:docker}
	I0103 20:13:30.061987   62050 ssh_runner.go:195] Run: cat /etc/os-release
	I0103 20:13:30.066591   62050 info.go:137] Remote host: Buildroot 2021.02.12
	I0103 20:13:30.066620   62050 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-9609/.minikube/addons for local assets ...
	I0103 20:13:30.066704   62050 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-9609/.minikube/files for local assets ...
	I0103 20:13:30.066795   62050 filesync.go:149] local asset: /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem -> 167952.pem in /etc/ssl/certs
	I0103 20:13:30.066899   62050 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0103 20:13:30.076755   62050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem --> /etc/ssl/certs/167952.pem (1708 bytes)
	I0103 20:13:30.099740   62050 start.go:303] post-start completed in 128.750887ms
	I0103 20:13:30.099763   62050 fix.go:56] fixHost completed within 20.287967183s
	I0103 20:13:30.099782   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHHostname
	I0103 20:13:30.102744   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:30.103145   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c8:9f", ip: ""} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:13:30.103177   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:30.103409   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHPort
	I0103 20:13:30.103633   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHKeyPath
	I0103 20:13:30.103846   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHKeyPath
	I0103 20:13:30.104080   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHUsername
	I0103 20:13:30.104308   62050 main.go:141] libmachine: Using SSH client type: native
	I0103 20:13:30.104680   62050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.39.139 22 <nil> <nil>}
	I0103 20:13:30.104696   62050 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0103 20:13:30.219120   62050 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704312810.161605674
	
	I0103 20:13:30.219145   62050 fix.go:206] guest clock: 1704312810.161605674
	I0103 20:13:30.219154   62050 fix.go:219] Guest: 2024-01-03 20:13:30.161605674 +0000 UTC Remote: 2024-01-03 20:13:30.099767061 +0000 UTC m=+264.645600185 (delta=61.838613ms)
	I0103 20:13:30.219191   62050 fix.go:190] guest clock delta is within tolerance: 61.838613ms
	I0103 20:13:30.219202   62050 start.go:83] releasing machines lock for "default-k8s-diff-port-018788", held for 20.407440359s
	I0103 20:13:30.219230   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .DriverName
	I0103 20:13:30.219551   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetIP
	I0103 20:13:30.222200   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:30.222616   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c8:9f", ip: ""} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:13:30.222650   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:30.222811   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .DriverName
	I0103 20:13:30.223411   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .DriverName
	I0103 20:13:30.223568   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .DriverName
	I0103 20:13:30.223643   62050 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0103 20:13:30.223686   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHHostname
	I0103 20:13:30.223940   62050 ssh_runner.go:195] Run: cat /version.json
	I0103 20:13:30.223970   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHHostname
	I0103 20:13:30.226394   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:30.226746   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c8:9f", ip: ""} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:13:30.226777   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:30.226809   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:30.227080   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHPort
	I0103 20:13:30.227274   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHKeyPath
	I0103 20:13:30.227389   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c8:9f", ip: ""} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:13:30.227443   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHUsername
	I0103 20:13:30.227446   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:30.227567   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHPort
	I0103 20:13:30.227595   62050 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/default-k8s-diff-port-018788/id_rsa Username:docker}
	I0103 20:13:30.227739   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHKeyPath
	I0103 20:13:30.227864   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHUsername
	I0103 20:13:30.227972   62050 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/default-k8s-diff-port-018788/id_rsa Username:docker}
	I0103 20:13:30.315855   62050 ssh_runner.go:195] Run: systemctl --version
	I0103 20:13:30.359117   62050 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0103 20:13:30.499200   62050 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0103 20:13:30.505296   62050 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0103 20:13:30.505768   62050 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0103 20:13:30.520032   62050 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0103 20:13:30.520059   62050 start.go:475] detecting cgroup driver to use...
	I0103 20:13:30.520146   62050 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0103 20:13:30.532684   62050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0103 20:13:30.545152   62050 docker.go:203] disabling cri-docker service (if available) ...
	I0103 20:13:30.545222   62050 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0103 20:13:30.558066   62050 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0103 20:13:30.570999   62050 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0103 20:13:30.682484   62050 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0103 20:13:30.802094   62050 docker.go:219] disabling docker service ...
	I0103 20:13:30.802171   62050 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0103 20:13:30.815796   62050 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0103 20:13:30.827982   62050 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0103 20:13:30.952442   62050 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0103 20:13:31.068759   62050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0103 20:13:31.083264   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0103 20:13:31.102893   62050 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0103 20:13:31.102979   62050 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:13:31.112366   62050 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0103 20:13:31.112433   62050 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:13:31.122940   62050 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:13:31.133385   62050 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:13:31.144251   62050 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0103 20:13:31.155210   62050 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0103 20:13:31.164488   62050 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0103 20:13:31.164552   62050 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0103 20:13:31.177632   62050 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0103 20:13:31.186983   62050 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0103 20:13:31.309264   62050 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0103 20:13:31.493626   62050 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0103 20:13:31.493706   62050 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0103 20:13:31.504103   62050 start.go:543] Will wait 60s for crictl version
	I0103 20:13:31.504187   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:13:31.507927   62050 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0103 20:13:31.543967   62050 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0103 20:13:31.544046   62050 ssh_runner.go:195] Run: crio --version
	I0103 20:13:31.590593   62050 ssh_runner.go:195] Run: crio --version
	I0103 20:13:31.639562   62050 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0103 20:13:30.242808   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .Start
	I0103 20:13:30.242991   61400 main.go:141] libmachine: (old-k8s-version-927922) Ensuring networks are active...
	I0103 20:13:30.243776   61400 main.go:141] libmachine: (old-k8s-version-927922) Ensuring network default is active
	I0103 20:13:30.244126   61400 main.go:141] libmachine: (old-k8s-version-927922) Ensuring network mk-old-k8s-version-927922 is active
	I0103 20:13:30.244504   61400 main.go:141] libmachine: (old-k8s-version-927922) Getting domain xml...
	I0103 20:13:30.245244   61400 main.go:141] libmachine: (old-k8s-version-927922) Creating domain...
	I0103 20:13:31.553239   61400 main.go:141] libmachine: (old-k8s-version-927922) Waiting to get IP...
	I0103 20:13:31.554409   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:31.554942   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | unable to find current IP address of domain old-k8s-version-927922 in network mk-old-k8s-version-927922
	I0103 20:13:31.555022   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | I0103 20:13:31.554922   63030 retry.go:31] will retry after 192.654673ms: waiting for machine to come up
	I0103 20:13:31.749588   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:31.750035   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | unable to find current IP address of domain old-k8s-version-927922 in network mk-old-k8s-version-927922
	I0103 20:13:31.750058   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | I0103 20:13:31.750000   63030 retry.go:31] will retry after 270.810728ms: waiting for machine to come up
	I0103 20:13:32.022736   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:32.023310   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | unable to find current IP address of domain old-k8s-version-927922 in network mk-old-k8s-version-927922
	I0103 20:13:32.023337   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | I0103 20:13:32.023280   63030 retry.go:31] will retry after 327.320898ms: waiting for machine to come up
	I0103 20:13:32.352845   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:32.353453   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | unable to find current IP address of domain old-k8s-version-927922 in network mk-old-k8s-version-927922
	I0103 20:13:32.353501   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | I0103 20:13:32.353395   63030 retry.go:31] will retry after 575.525231ms: waiting for machine to come up
	I0103 20:13:32.930217   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:32.930833   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | unable to find current IP address of domain old-k8s-version-927922 in network mk-old-k8s-version-927922
	I0103 20:13:32.930859   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | I0103 20:13:32.930741   63030 retry.go:31] will retry after 571.986596ms: waiting for machine to come up
	I0103 20:13:30.936363   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:13:32.939164   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:13:29.833307   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:29.833374   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:29.844819   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:30.333870   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:30.333936   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:30.345802   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:30.833281   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:30.833400   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:30.848469   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:31.334071   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:31.334151   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:31.346445   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:31.833944   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:31.834034   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:31.848925   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:32.333349   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:32.333432   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:32.349173   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:32.833632   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:32.833696   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:32.848186   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:33.333659   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:33.333757   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:33.349560   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:33.834221   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:33.834309   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:33.846637   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:34.334219   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:34.334299   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:34.350703   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:31.641182   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetIP
	I0103 20:13:31.644371   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:31.644677   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c8:9f", ip: ""} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:13:31.644712   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:13:31.644971   62050 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0103 20:13:31.649106   62050 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0103 20:13:31.662256   62050 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0103 20:13:31.662380   62050 ssh_runner.go:195] Run: sudo crictl images --output json
	I0103 20:13:31.701210   62050 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0103 20:13:31.701275   62050 ssh_runner.go:195] Run: which lz4
	I0103 20:13:31.704890   62050 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0103 20:13:31.708756   62050 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0103 20:13:31.708783   62050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0103 20:13:33.543202   62050 crio.go:444] Took 1.838336 seconds to copy over tarball
	I0103 20:13:33.543282   62050 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0103 20:13:33.504797   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:33.505336   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | unable to find current IP address of domain old-k8s-version-927922 in network mk-old-k8s-version-927922
	I0103 20:13:33.505363   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | I0103 20:13:33.505286   63030 retry.go:31] will retry after 593.865088ms: waiting for machine to come up
	I0103 20:13:34.101055   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:34.101559   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | unable to find current IP address of domain old-k8s-version-927922 in network mk-old-k8s-version-927922
	I0103 20:13:34.101593   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | I0103 20:13:34.101507   63030 retry.go:31] will retry after 1.016460442s: waiting for machine to come up
	I0103 20:13:35.119877   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:35.120383   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | unable to find current IP address of domain old-k8s-version-927922 in network mk-old-k8s-version-927922
	I0103 20:13:35.120415   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | I0103 20:13:35.120352   63030 retry.go:31] will retry after 1.462823241s: waiting for machine to come up
	I0103 20:13:36.585467   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:36.585968   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | unable to find current IP address of domain old-k8s-version-927922 in network mk-old-k8s-version-927922
	I0103 20:13:36.585993   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | I0103 20:13:36.585932   63030 retry.go:31] will retry after 1.213807131s: waiting for machine to come up
	I0103 20:13:37.801504   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:37.801970   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | unable to find current IP address of domain old-k8s-version-927922 in network mk-old-k8s-version-927922
	I0103 20:13:37.801999   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | I0103 20:13:37.801896   63030 retry.go:31] will retry after 1.961227471s: waiting for machine to come up
	I0103 20:13:35.435661   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:13:37.435870   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:13:34.834090   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:34.834160   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:34.848657   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:35.333723   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:35.333809   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:35.348582   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:35.834128   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:35.834208   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:35.845911   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:36.333385   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:36.333512   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:36.346391   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:36.833978   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:36.834054   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:36.847134   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:37.333698   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:37.333785   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:37.346411   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:37.834024   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:37.834141   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:37.846961   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:38.333461   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:38.333665   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:38.346713   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:38.834378   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:38.834470   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:38.848473   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:39.333266   62015 api_server.go:166] Checking apiserver status ...
	I0103 20:13:39.333347   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:39.345638   62015 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:39.345664   62015 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0103 20:13:39.345692   62015 kubeadm.go:1135] stopping kube-system containers ...
	I0103 20:13:39.345721   62015 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0103 20:13:39.345792   62015 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0103 20:13:39.387671   62015 cri.go:89] found id: ""
	I0103 20:13:39.387778   62015 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0103 20:13:39.403523   62015 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0103 20:13:39.413114   62015 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0103 20:13:39.413188   62015 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0103 20:13:39.421503   62015 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0103 20:13:39.421527   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:39.561406   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:36.473303   62050 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.929985215s)
	I0103 20:13:36.473337   62050 crio.go:451] Took 2.930104 seconds to extract the tarball
	I0103 20:13:36.473350   62050 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0103 20:13:36.513202   62050 ssh_runner.go:195] Run: sudo crictl images --output json
	I0103 20:13:36.557201   62050 crio.go:496] all images are preloaded for cri-o runtime.
	I0103 20:13:36.557231   62050 cache_images.go:84] Images are preloaded, skipping loading
	I0103 20:13:36.557314   62050 ssh_runner.go:195] Run: crio config
	I0103 20:13:36.618916   62050 cni.go:84] Creating CNI manager for ""
	I0103 20:13:36.618948   62050 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0103 20:13:36.618982   62050 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0103 20:13:36.619007   62050 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.139 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-018788 NodeName:default-k8s-diff-port-018788 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.139"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.139 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0103 20:13:36.619167   62050 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.139
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-018788"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.139
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.139"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0103 20:13:36.619242   62050 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-018788 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.139
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-018788 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0103 20:13:36.619294   62050 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0103 20:13:36.628488   62050 binaries.go:44] Found k8s binaries, skipping transfer
	I0103 20:13:36.628571   62050 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0103 20:13:36.637479   62050 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I0103 20:13:36.652608   62050 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0103 20:13:36.667432   62050 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I0103 20:13:36.683138   62050 ssh_runner.go:195] Run: grep 192.168.39.139	control-plane.minikube.internal$ /etc/hosts
	I0103 20:13:36.687022   62050 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.139	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0103 20:13:36.698713   62050 certs.go:56] Setting up /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/default-k8s-diff-port-018788 for IP: 192.168.39.139
	I0103 20:13:36.698755   62050 certs.go:190] acquiring lock for shared ca certs: {Name:mkcbd6a6a2f3ee7625ecf4a1f72bb7f9689bd33d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:13:36.698948   62050 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17885-9609/.minikube/ca.key
	I0103 20:13:36.699009   62050 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.key
	I0103 20:13:36.699098   62050 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/default-k8s-diff-port-018788/client.key
	I0103 20:13:36.699157   62050 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/default-k8s-diff-port-018788/apiserver.key.7716debd
	I0103 20:13:36.699196   62050 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/default-k8s-diff-port-018788/proxy-client.key
	I0103 20:13:36.699287   62050 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795.pem (1338 bytes)
	W0103 20:13:36.699314   62050 certs.go:433] ignoring /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795_empty.pem, impossibly tiny 0 bytes
	I0103 20:13:36.699324   62050 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem (1675 bytes)
	I0103 20:13:36.699349   62050 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem (1078 bytes)
	I0103 20:13:36.699370   62050 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem (1123 bytes)
	I0103 20:13:36.699395   62050 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem (1679 bytes)
	I0103 20:13:36.699434   62050 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem (1708 bytes)
	I0103 20:13:36.700045   62050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/default-k8s-diff-port-018788/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0103 20:13:36.721872   62050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/default-k8s-diff-port-018788/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0103 20:13:36.744733   62050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/default-k8s-diff-port-018788/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0103 20:13:36.772245   62050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/default-k8s-diff-port-018788/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0103 20:13:36.796690   62050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0103 20:13:36.819792   62050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0103 20:13:36.843109   62050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0103 20:13:36.866679   62050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0103 20:13:36.889181   62050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0103 20:13:36.912082   62050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795.pem --> /usr/share/ca-certificates/16795.pem (1338 bytes)
	I0103 20:13:36.935621   62050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem --> /usr/share/ca-certificates/167952.pem (1708 bytes)
	I0103 20:13:36.959090   62050 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0103 20:13:36.974873   62050 ssh_runner.go:195] Run: openssl version
	I0103 20:13:36.980449   62050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167952.pem && ln -fs /usr/share/ca-certificates/167952.pem /etc/ssl/certs/167952.pem"
	I0103 20:13:36.990278   62050 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167952.pem
	I0103 20:13:36.995822   62050 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  3 19:07 /usr/share/ca-certificates/167952.pem
	I0103 20:13:36.995903   62050 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167952.pem
	I0103 20:13:37.001504   62050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167952.pem /etc/ssl/certs/3ec20f2e.0"
	I0103 20:13:37.011628   62050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0103 20:13:37.021373   62050 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:13:37.025697   62050 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  3 18:58 /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:13:37.025752   62050 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:13:37.031286   62050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0103 20:13:37.041075   62050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16795.pem && ln -fs /usr/share/ca-certificates/16795.pem /etc/ssl/certs/16795.pem"
	I0103 20:13:37.050789   62050 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16795.pem
	I0103 20:13:37.055584   62050 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  3 19:07 /usr/share/ca-certificates/16795.pem
	I0103 20:13:37.055647   62050 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16795.pem
	I0103 20:13:37.061079   62050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16795.pem /etc/ssl/certs/51391683.0"
	I0103 20:13:37.070792   62050 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0103 20:13:37.075050   62050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0103 20:13:37.081170   62050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0103 20:13:37.087372   62050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0103 20:13:37.093361   62050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0103 20:13:37.099203   62050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0103 20:13:37.104932   62050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
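Note: the steps above copy each CA into /usr/share/ca-certificates, hash it with openssl, symlink the hash name under /etc/ssl/certs, and then verify that the control-plane certificates stay valid for at least 24 hours (-checkend 86400). A minimal shell sketch of the same flow, with /usr/share/ca-certificates/example.pem as a purely hypothetical path:

  # Install a CA cert into the system trust store the way the log above does.
  CERT=/usr/share/ca-certificates/example.pem          # hypothetical cert path
  HASH=$(openssl x509 -hash -noout -in "$CERT")        # subject-name hash, e.g. b5213941
  sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"       # OpenSSL looks up trust anchors as <hash>.0

  # Exit non-zero if a certificate expires within the next 24h (86400s),
  # mirroring the -checkend checks on the apiserver/etcd client certs above.
  openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
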
	I0103 20:13:37.110783   62050 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-018788 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.28.4 ClusterName:default-k8s-diff-port-018788 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.139 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false Extr
aDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 20:13:37.110955   62050 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0103 20:13:37.111003   62050 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0103 20:13:37.146687   62050 cri.go:89] found id: ""
	I0103 20:13:37.146766   62050 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0103 20:13:37.156789   62050 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0103 20:13:37.156808   62050 kubeadm.go:636] restartCluster start
	I0103 20:13:37.156882   62050 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0103 20:13:37.166168   62050 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:37.167346   62050 kubeconfig.go:92] found "default-k8s-diff-port-018788" server: "https://192.168.39.139:8444"
	I0103 20:13:37.169750   62050 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0103 20:13:37.178965   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:37.179035   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:37.190638   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:37.679072   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:37.679142   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:37.691149   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:38.179709   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:38.179804   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:38.191656   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:38.679825   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:38.679912   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:38.693380   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:39.179927   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:39.180042   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:39.193368   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:39.679947   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:39.680049   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:39.692444   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:40.179510   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:40.179600   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:40.192218   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
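Note: each "Checking apiserver status" iteration above (and continuing below) is a pgrep probe that exits 1 while no kube-apiserver process for this profile exists; the harness simply retries on a short interval. A hedged sketch of the equivalent wait loop (the attempt count and interval here are illustrative, not minikube's actual values):

  # Poll until a kube-apiserver process started for this minikube profile appears.
  for attempt in $(seq 1 60); do
    if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
      echo "apiserver process is up"
      break
    fi
    sleep 0.5   # the log shows roughly 500ms between attempts
  done
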
	I0103 20:13:39.764226   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:39.764651   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | unable to find current IP address of domain old-k8s-version-927922 in network mk-old-k8s-version-927922
	I0103 20:13:39.764681   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | I0103 20:13:39.764592   63030 retry.go:31] will retry after 2.38598238s: waiting for machine to come up
	I0103 20:13:42.151992   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:42.152486   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | unable to find current IP address of domain old-k8s-version-927922 in network mk-old-k8s-version-927922
	I0103 20:13:42.152517   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | I0103 20:13:42.152435   63030 retry.go:31] will retry after 3.320569317s: waiting for machine to come up
	I0103 20:13:39.438887   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:13:41.441552   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:13:40.707462   62015 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.146014282s)
	I0103 20:13:40.707501   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:40.913812   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:41.008294   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:41.093842   62015 api_server.go:52] waiting for apiserver process to appear ...
	I0103 20:13:41.093931   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:13:41.594484   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:13:42.094333   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:13:42.594647   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:13:43.094744   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:13:43.594323   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:13:43.628624   62015 api_server.go:72] duration metric: took 2.534781213s to wait for apiserver process to appear ...
	I0103 20:13:43.628653   62015 api_server.go:88] waiting for apiserver healthz status ...
	I0103 20:13:43.628674   62015 api_server.go:253] Checking apiserver healthz at https://192.168.61.245:8443/healthz ...
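Note: the no-preload restart above re-runs individual kubeadm init phases (kubeconfig, kubelet-start, control-plane, etcd) against the existing config rather than a full kubeadm init. Condensed into shell, using the binary and config paths exactly as they appear in the log:

  K8S_BIN=/var/lib/minikube/binaries/v1.29.0-rc.2
  CFG=/var/tmp/minikube/kubeadm.yaml

  sudo env PATH="$K8S_BIN:$PATH" kubeadm init phase kubeconfig    all   --config "$CFG"
  sudo env PATH="$K8S_BIN:$PATH" kubeadm init phase kubelet-start       --config "$CFG"
  sudo env PATH="$K8S_BIN:$PATH" kubeadm init phase control-plane all   --config "$CFG"
  sudo env PATH="$K8S_BIN:$PATH" kubeadm init phase etcd          local --config "$CFG"
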
	I0103 20:13:40.679867   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:40.679959   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:40.692707   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:41.179865   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:41.179962   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:41.192901   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:41.679604   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:41.679668   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:41.691755   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:42.179959   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:42.180082   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:42.193149   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:42.679682   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:42.679808   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:42.696777   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:43.179236   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:43.179343   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:43.195021   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:43.679230   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:43.679339   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:43.696886   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:44.179488   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:44.179558   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:44.194865   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:44.679087   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:44.679216   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:44.693383   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:45.179505   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:45.179607   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:45.190496   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:45.474145   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:45.474596   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | unable to find current IP address of domain old-k8s-version-927922 in network mk-old-k8s-version-927922
	I0103 20:13:45.474623   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | I0103 20:13:45.474542   63030 retry.go:31] will retry after 3.652901762s: waiting for machine to come up
	I0103 20:13:43.937146   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:13:45.938328   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:13:47.941499   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:13:47.277935   62015 api_server.go:279] https://192.168.61.245:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0103 20:13:47.277971   62015 api_server.go:103] status: https://192.168.61.245:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0103 20:13:47.277988   62015 api_server.go:253] Checking apiserver healthz at https://192.168.61.245:8443/healthz ...
	I0103 20:13:47.543418   62015 api_server.go:279] https://192.168.61.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0103 20:13:47.543449   62015 api_server.go:103] status: https://192.168.61.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0103 20:13:47.629720   62015 api_server.go:253] Checking apiserver healthz at https://192.168.61.245:8443/healthz ...
	I0103 20:13:47.635340   62015 api_server.go:279] https://192.168.61.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0103 20:13:47.635373   62015 api_server.go:103] status: https://192.168.61.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0103 20:13:48.128849   62015 api_server.go:253] Checking apiserver healthz at https://192.168.61.245:8443/healthz ...
	I0103 20:13:48.135534   62015 api_server.go:279] https://192.168.61.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0103 20:13:48.135576   62015 api_server.go:103] status: https://192.168.61.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0103 20:13:48.628977   62015 api_server.go:253] Checking apiserver healthz at https://192.168.61.245:8443/healthz ...
	I0103 20:13:48.634609   62015 api_server.go:279] https://192.168.61.245:8443/healthz returned 200:
	ok
	I0103 20:13:48.643475   62015 api_server.go:141] control plane version: v1.29.0-rc.2
	I0103 20:13:48.643505   62015 api_server.go:131] duration metric: took 5.01484434s to wait for apiserver health ...
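Note: the healthz probes above progress from 403 (the anonymous user is rejected before the RBAC bootstrap roles exist), through 500 (post-start hooks such as rbac/bootstrap-roles still failing), to 200 once all hooks complete. A manual check against the same endpoint, using standard curl/kubectl invocations and the endpoint from the log:

  # Raw probe (no client cert): expect 403 for "system:anonymous" until RBAC is bootstrapped.
  curl -ks 'https://192.168.61.245:8443/healthz?verbose'; echo

  # Authenticated equivalent once the admin kubeconfig is in place on the node.
  kubectl --kubeconfig /etc/kubernetes/admin.conf get --raw '/healthz?verbose'
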
	I0103 20:13:48.643517   62015 cni.go:84] Creating CNI manager for ""
	I0103 20:13:48.643526   62015 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0103 20:13:48.645945   62015 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0103 20:13:48.647556   62015 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0103 20:13:48.671093   62015 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
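Note: "Configuring bridge CNI" above writes a single conflist into /etc/cni/net.d; the file contents are not shown in the log. The sketch below is a generic bridge CNI configuration of the kind that step produces, with the subnet chosen purely for illustration:

  sudo mkdir -p /etc/cni/net.d
  sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
  {
    "cniVersion": "0.3.1",
    "name": "bridge",
    "plugins": [
      {
        "type": "bridge",
        "bridge": "bridge",
        "isDefaultGateway": true,
        "ipMasq": true,
        "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
      },
      { "type": "portmap", "capabilities": { "portMappings": true } }
    ]
  }
  EOF
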
	I0103 20:13:48.698710   62015 system_pods.go:43] waiting for kube-system pods to appear ...
	I0103 20:13:48.712654   62015 system_pods.go:59] 8 kube-system pods found
	I0103 20:13:48.712704   62015 system_pods.go:61] "coredns-76f75df574-rbx58" [d5e91e6a-e3f9-4dbc-83ff-3069cb67847c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0103 20:13:48.712717   62015 system_pods.go:61] "etcd-no-preload-749210" [3cfe84f3-28bd-490f-a7fc-152c1b9784ce] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0103 20:13:48.712729   62015 system_pods.go:61] "kube-apiserver-no-preload-749210" [1d9d03fa-23c6-4432-b7ec-905fcab8a628] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0103 20:13:48.712739   62015 system_pods.go:61] "kube-controller-manager-no-preload-749210" [4e4207ef-8844-4547-88a4-b12026250554] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0103 20:13:48.712761   62015 system_pods.go:61] "kube-proxy-5hwf4" [98fafdf5-9a74-4c9f-96eb-20064c72c4e1] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0103 20:13:48.712771   62015 system_pods.go:61] "kube-scheduler-no-preload-749210" [21e70024-26b0-4740-ba52-99893ca20809] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0103 20:13:48.712780   62015 system_pods.go:61] "metrics-server-57f55c9bc5-tqn5m" [8cc1dc91-fafb-4405-8820-a7f99ccbbb0c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0103 20:13:48.712793   62015 system_pods.go:61] "storage-provisioner" [1bf4f1d7-c083-47e7-9976-76bbc72e7bff] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0103 20:13:48.712806   62015 system_pods.go:74] duration metric: took 14.071881ms to wait for pod list to return data ...
	I0103 20:13:48.712818   62015 node_conditions.go:102] verifying NodePressure condition ...
	I0103 20:13:48.716271   62015 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0103 20:13:48.716301   62015 node_conditions.go:123] node cpu capacity is 2
	I0103 20:13:48.716326   62015 node_conditions.go:105] duration metric: took 3.496257ms to run NodePressure ...
	I0103 20:13:48.716348   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:49.020956   62015 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0103 20:13:49.025982   62015 kubeadm.go:787] kubelet initialised
	I0103 20:13:49.026003   62015 kubeadm.go:788] duration metric: took 5.022549ms waiting for restarted kubelet to initialise ...
	I0103 20:13:49.026010   62015 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:13:49.033471   62015 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-rbx58" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:49.038777   62015 pod_ready.go:97] node "no-preload-749210" hosting pod "coredns-76f75df574-rbx58" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-749210" has status "Ready":"False"
	I0103 20:13:49.038806   62015 pod_ready.go:81] duration metric: took 5.286579ms waiting for pod "coredns-76f75df574-rbx58" in "kube-system" namespace to be "Ready" ...
	E0103 20:13:49.038823   62015 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-749210" hosting pod "coredns-76f75df574-rbx58" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-749210" has status "Ready":"False"
	I0103 20:13:49.038834   62015 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-749210" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:49.044324   62015 pod_ready.go:97] node "no-preload-749210" hosting pod "etcd-no-preload-749210" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-749210" has status "Ready":"False"
	I0103 20:13:49.044349   62015 pod_ready.go:81] duration metric: took 5.506628ms waiting for pod "etcd-no-preload-749210" in "kube-system" namespace to be "Ready" ...
	E0103 20:13:49.044357   62015 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-749210" hosting pod "etcd-no-preload-749210" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-749210" has status "Ready":"False"
	I0103 20:13:49.044363   62015 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-749210" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:49.049022   62015 pod_ready.go:97] node "no-preload-749210" hosting pod "kube-apiserver-no-preload-749210" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-749210" has status "Ready":"False"
	I0103 20:13:49.049058   62015 pod_ready.go:81] duration metric: took 4.681942ms waiting for pod "kube-apiserver-no-preload-749210" in "kube-system" namespace to be "Ready" ...
	E0103 20:13:49.049068   62015 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-749210" hosting pod "kube-apiserver-no-preload-749210" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-749210" has status "Ready":"False"
	I0103 20:13:49.049073   62015 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-749210" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:49.102378   62015 pod_ready.go:97] node "no-preload-749210" hosting pod "kube-controller-manager-no-preload-749210" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-749210" has status "Ready":"False"
	I0103 20:13:49.102407   62015 pod_ready.go:81] duration metric: took 53.323019ms waiting for pod "kube-controller-manager-no-preload-749210" in "kube-system" namespace to be "Ready" ...
	E0103 20:13:49.102415   62015 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-749210" hosting pod "kube-controller-manager-no-preload-749210" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-749210" has status "Ready":"False"
	I0103 20:13:49.102424   62015 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-5hwf4" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:49.504820   62015 pod_ready.go:97] node "no-preload-749210" hosting pod "kube-proxy-5hwf4" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-749210" has status "Ready":"False"
	I0103 20:13:49.504852   62015 pod_ready.go:81] duration metric: took 402.417876ms waiting for pod "kube-proxy-5hwf4" in "kube-system" namespace to be "Ready" ...
	E0103 20:13:49.504865   62015 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-749210" hosting pod "kube-proxy-5hwf4" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-749210" has status "Ready":"False"
	I0103 20:13:49.504875   62015 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-749210" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:49.905230   62015 pod_ready.go:97] node "no-preload-749210" hosting pod "kube-scheduler-no-preload-749210" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-749210" has status "Ready":"False"
	I0103 20:13:49.905265   62015 pod_ready.go:81] duration metric: took 400.380902ms waiting for pod "kube-scheduler-no-preload-749210" in "kube-system" namespace to be "Ready" ...
	E0103 20:13:49.905278   62015 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-749210" hosting pod "kube-scheduler-no-preload-749210" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-749210" has status "Ready":"False"
	I0103 20:13:49.905287   62015 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:50.304848   62015 pod_ready.go:97] node "no-preload-749210" hosting pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-749210" has status "Ready":"False"
	I0103 20:13:50.304883   62015 pod_ready.go:81] duration metric: took 399.567527ms waiting for pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace to be "Ready" ...
	E0103 20:13:50.304896   62015 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-749210" hosting pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-749210" has status "Ready":"False"
	I0103 20:13:50.304905   62015 pod_ready.go:38] duration metric: took 1.278887327s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
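Note: the pod_ready waits above poll each system-critical pod but skip the Ready condition while the node itself still reports Ready=False. Outside the test harness, a roughly equivalent check with kubectl (selectors and timeouts are illustrative) would be:

  # Wait for the node, then for the core system pods, mirroring the checks above.
  kubectl wait --for=condition=Ready node/no-preload-749210 --timeout=6m
  kubectl -n kube-system wait --for=condition=Ready pod \
    -l 'k8s-app in (kube-dns, kube-proxy)' --timeout=4m
  kubectl -n kube-system get pods -o wide
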
	I0103 20:13:50.304926   62015 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0103 20:13:50.331405   62015 ops.go:34] apiserver oom_adj: -16
	I0103 20:13:50.331428   62015 kubeadm.go:640] restartCluster took 21.020194358s
	I0103 20:13:50.331439   62015 kubeadm.go:406] StartCluster complete in 21.075864121s
	I0103 20:13:50.331459   62015 settings.go:142] acquiring lock: {Name:mkd213c48538fa01cb82b417485055a8adbf5e4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:13:50.331541   62015 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17885-9609/kubeconfig
	I0103 20:13:50.333553   62015 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-9609/kubeconfig: {Name:mkbd4e6a8b39f5a4a43fb71671a7bbd8b1617cf1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:13:50.333969   62015 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0103 20:13:50.334045   62015 addons.go:69] Setting storage-provisioner=true in profile "no-preload-749210"
	I0103 20:13:50.334064   62015 addons.go:237] Setting addon storage-provisioner=true in "no-preload-749210"
	W0103 20:13:50.334072   62015 addons.go:246] addon storage-provisioner should already be in state true
	I0103 20:13:50.334082   62015 config.go:182] Loaded profile config "no-preload-749210": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0103 20:13:50.334121   62015 host.go:66] Checking if "no-preload-749210" exists ...
	I0103 20:13:50.334129   62015 addons.go:69] Setting default-storageclass=true in profile "no-preload-749210"
	I0103 20:13:50.334143   62015 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-749210"
	I0103 20:13:50.334556   62015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:13:50.334588   62015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:13:50.334602   62015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:13:50.334620   62015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:13:50.334681   62015 addons.go:69] Setting metrics-server=true in profile "no-preload-749210"
	I0103 20:13:50.334708   62015 addons.go:237] Setting addon metrics-server=true in "no-preload-749210"
	I0103 20:13:50.334712   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	W0103 20:13:50.334717   62015 addons.go:246] addon metrics-server should already be in state true
	I0103 20:13:50.334756   62015 host.go:66] Checking if "no-preload-749210" exists ...
	I0103 20:13:50.335152   62015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:13:50.335190   62015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:13:50.343173   62015 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-749210" context rescaled to 1 replicas
	I0103 20:13:50.343213   62015 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.245 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0103 20:13:50.345396   62015 out.go:177] * Verifying Kubernetes components...
	I0103 20:13:50.347721   62015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 20:13:50.353122   62015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34207
	I0103 20:13:50.353250   62015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35835
	I0103 20:13:50.353274   62015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44003
	I0103 20:13:50.353737   62015 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:13:50.353896   62015 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:13:50.354283   62015 main.go:141] libmachine: Using API Version  1
	I0103 20:13:50.354299   62015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:13:50.354488   62015 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:13:50.354491   62015 main.go:141] libmachine: Using API Version  1
	I0103 20:13:50.354588   62015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:13:50.354889   62015 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:13:50.355115   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetState
	I0103 20:13:50.355165   62015 main.go:141] libmachine: Using API Version  1
	I0103 20:13:50.355181   62015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:13:50.355244   62015 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:13:50.355746   62015 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:13:50.356199   62015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:13:50.356239   62015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:13:50.356792   62015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:13:50.356830   62015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:13:50.359095   62015 addons.go:237] Setting addon default-storageclass=true in "no-preload-749210"
	W0103 20:13:50.359114   62015 addons.go:246] addon default-storageclass should already be in state true
	I0103 20:13:50.359139   62015 host.go:66] Checking if "no-preload-749210" exists ...
	I0103 20:13:50.359554   62015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:13:50.359595   62015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:13:50.377094   62015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34801
	I0103 20:13:50.377218   62015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33435
	I0103 20:13:50.377679   62015 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:13:50.377779   62015 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:13:50.378353   62015 main.go:141] libmachine: Using API Version  1
	I0103 20:13:50.378376   62015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:13:50.378472   62015 main.go:141] libmachine: Using API Version  1
	I0103 20:13:50.378488   62015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:13:50.378816   62015 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:13:50.378874   62015 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:13:50.379033   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetState
	I0103 20:13:50.379033   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetState
	I0103 20:13:50.381013   62015 main.go:141] libmachine: (no-preload-749210) Calling .DriverName
	I0103 20:13:50.381240   62015 main.go:141] libmachine: (no-preload-749210) Calling .DriverName
	I0103 20:13:50.389265   62015 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 20:13:50.383848   62015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38103
	I0103 20:13:50.391000   62015 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0103 20:13:50.391023   62015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0103 20:13:50.391049   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHHostname
	I0103 20:13:50.391062   62015 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0103 20:13:45.679265   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:45.679374   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:45.690232   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:46.179862   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:46.179963   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:46.190942   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:46.679624   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:46.679738   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:46.691578   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:47.179185   62050 api_server.go:166] Checking apiserver status ...
	I0103 20:13:47.179280   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:13:47.193995   62050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:47.194029   62050 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0103 20:13:47.194050   62050 kubeadm.go:1135] stopping kube-system containers ...
	I0103 20:13:47.194061   62050 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0103 20:13:47.194114   62050 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0103 20:13:47.235512   62050 cri.go:89] found id: ""
	I0103 20:13:47.235625   62050 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0103 20:13:47.251115   62050 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0103 20:13:47.261566   62050 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0103 20:13:47.261631   62050 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0103 20:13:47.271217   62050 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0103 20:13:47.271244   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:47.408550   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:48.262356   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:48.492357   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:48.597607   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:48.699097   62050 api_server.go:52] waiting for apiserver process to appear ...
	I0103 20:13:48.699194   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:13:49.199349   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:13:49.699758   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:13:50.199818   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
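Note: because /etc/kubernetes/admin.conf and the other config files were missing, the default-k8s-diff-port restart above falls back to a full reconfigure: list any leftover kube-system containers, stop the kubelet, promote the staged kubeadm.yaml, and re-run the init phases. Condensed into shell, with paths and the Kubernetes version taken from the log:

  # List kube-system containers left over from the previous cluster (the log finds none), then stop the kubelet.
  sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
  sudo systemctl stop kubelet

  # Promote the freshly generated config and rebuild the control plane from it.
  sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
  K8S_BIN=/var/lib/minikube/binaries/v1.28.4
  for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
    sudo env PATH="$K8S_BIN:$PATH" kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
  done
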
	I0103 20:13:50.392557   62015 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0103 20:13:50.392577   62015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0103 20:13:50.392597   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHHostname
	I0103 20:13:50.391469   62015 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:13:50.393835   62015 main.go:141] libmachine: Using API Version  1
	I0103 20:13:50.393854   62015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:13:50.394340   62015 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:13:50.394967   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:50.395384   62015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:13:50.395419   62015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:13:50.395602   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHPort
	I0103 20:13:50.395663   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:87:c7", ip: ""} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:50.395683   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:50.395811   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHKeyPath
	I0103 20:13:50.395981   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHUsername
	I0103 20:13:50.396173   62015 sshutil.go:53] new ssh client: &{IP:192.168.61.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/no-preload-749210/id_rsa Username:docker}
	I0103 20:13:50.398544   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:50.399117   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:87:c7", ip: ""} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:50.399142   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:50.399363   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHPort
	I0103 20:13:50.399582   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHKeyPath
	I0103 20:13:50.399692   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHUsername
	I0103 20:13:50.399761   62015 sshutil.go:53] new ssh client: &{IP:192.168.61.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/no-preload-749210/id_rsa Username:docker}
	I0103 20:13:50.434719   62015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44691
	I0103 20:13:50.435279   62015 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:13:50.435938   62015 main.go:141] libmachine: Using API Version  1
	I0103 20:13:50.435972   62015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:13:50.436407   62015 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:13:50.436630   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetState
	I0103 20:13:50.438992   62015 main.go:141] libmachine: (no-preload-749210) Calling .DriverName
	I0103 20:13:50.442816   62015 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0103 20:13:50.442835   62015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0103 20:13:50.442856   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHHostname
	I0103 20:13:50.450157   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:50.451549   62015 main.go:141] libmachine: (no-preload-749210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:87:c7", ip: ""} in network mk-no-preload-749210: {Iface:virbr2 ExpiryTime:2024-01-03 21:13:02 +0000 UTC Type:0 Mac:52:54:00:fb:87:c7 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:no-preload-749210 Clientid:01:52:54:00:fb:87:c7}
	I0103 20:13:50.451575   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHPort
	I0103 20:13:50.451571   62015 main.go:141] libmachine: (no-preload-749210) DBG | domain no-preload-749210 has defined IP address 192.168.61.245 and MAC address 52:54:00:fb:87:c7 in network mk-no-preload-749210
	I0103 20:13:50.453023   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHKeyPath
	I0103 20:13:50.453577   62015 main.go:141] libmachine: (no-preload-749210) Calling .GetSSHUsername
	I0103 20:13:50.453753   62015 sshutil.go:53] new ssh client: &{IP:192.168.61.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/no-preload-749210/id_rsa Username:docker}
	I0103 20:13:50.556135   62015 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0103 20:13:50.556161   62015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0103 20:13:50.583620   62015 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0103 20:13:50.583643   62015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0103 20:13:50.589708   62015 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0103 20:13:50.614203   62015 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0103 20:13:50.631936   62015 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0103 20:13:50.631961   62015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0103 20:13:50.708658   62015 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0103 20:13:50.772364   62015 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0103 20:13:50.772434   62015 node_ready.go:35] waiting up to 6m0s for node "no-preload-749210" to be "Ready" ...
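Note: addon enablement above amounts to copying the manifests under /etc/kubernetes/addons and applying them with the cluster's own kubectl binary and kubeconfig. The storage-provisioner, storageclass, and metrics-server applies from the log, written out as plain shell:

  KCTL=/var/lib/minikube/binaries/v1.29.0-rc.2/kubectl
  sudo KUBECONFIG=/var/lib/minikube/kubeconfig $KCTL apply -f /etc/kubernetes/addons/storage-provisioner.yaml
  sudo KUBECONFIG=/var/lib/minikube/kubeconfig $KCTL apply -f /etc/kubernetes/addons/storageclass.yaml
  sudo KUBECONFIG=/var/lib/minikube/kubeconfig $KCTL apply \
    -f /etc/kubernetes/addons/metrics-apiservice.yaml \
    -f /etc/kubernetes/addons/metrics-server-deployment.yaml \
    -f /etc/kubernetes/addons/metrics-server-rbac.yaml \
    -f /etc/kubernetes/addons/metrics-server-service.yaml
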
	I0103 20:13:51.785361   62015 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.195620446s)
	I0103 20:13:51.785407   62015 main.go:141] libmachine: Making call to close driver server
	I0103 20:13:51.785421   62015 main.go:141] libmachine: (no-preload-749210) Calling .Close
	I0103 20:13:51.785427   62015 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.171187695s)
	I0103 20:13:51.785463   62015 main.go:141] libmachine: Making call to close driver server
	I0103 20:13:51.785488   62015 main.go:141] libmachine: (no-preload-749210) Calling .Close
	I0103 20:13:51.785603   62015 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.076908391s)
	I0103 20:13:51.785687   62015 main.go:141] libmachine: (no-preload-749210) DBG | Closing plugin on server side
	I0103 20:13:51.785717   62015 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:13:51.785730   62015 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:13:51.785739   62015 main.go:141] libmachine: Making call to close driver server
	I0103 20:13:51.785741   62015 main.go:141] libmachine: (no-preload-749210) DBG | Closing plugin on server side
	I0103 20:13:51.785748   62015 main.go:141] libmachine: (no-preload-749210) Calling .Close
	I0103 20:13:51.785819   62015 main.go:141] libmachine: Making call to close driver server
	I0103 20:13:51.785837   62015 main.go:141] libmachine: (no-preload-749210) Calling .Close
	I0103 20:13:51.786108   62015 main.go:141] libmachine: (no-preload-749210) DBG | Closing plugin on server side
	I0103 20:13:51.786143   62015 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:13:51.786152   62015 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:13:51.786166   62015 main.go:141] libmachine: Making call to close driver server
	I0103 20:13:51.786178   62015 main.go:141] libmachine: (no-preload-749210) Calling .Close
	I0103 20:13:51.786444   62015 main.go:141] libmachine: (no-preload-749210) DBG | Closing plugin on server side
	I0103 20:13:51.786495   62015 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:13:51.786536   62015 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:13:51.786553   62015 addons.go:473] Verifying addon metrics-server=true in "no-preload-749210"
	I0103 20:13:51.787346   62015 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:13:51.787365   62015 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:13:51.787376   62015 main.go:141] libmachine: Making call to close driver server
	I0103 20:13:51.787386   62015 main.go:141] libmachine: (no-preload-749210) Calling .Close
	I0103 20:13:51.787596   62015 main.go:141] libmachine: (no-preload-749210) DBG | Closing plugin on server side
	I0103 20:13:51.787638   62015 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:13:51.787652   62015 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:13:51.787855   62015 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:13:51.787859   62015 main.go:141] libmachine: (no-preload-749210) DBG | Closing plugin on server side
	I0103 20:13:51.787871   62015 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:13:51.797560   62015 main.go:141] libmachine: Making call to close driver server
	I0103 20:13:51.797584   62015 main.go:141] libmachine: (no-preload-749210) Calling .Close
	I0103 20:13:51.797860   62015 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:13:51.797874   62015 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:13:51.800087   62015 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I0103 20:13:49.131462   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:49.132013   61400 main.go:141] libmachine: (old-k8s-version-927922) Found IP for machine: 192.168.72.12
	I0103 20:13:49.132041   61400 main.go:141] libmachine: (old-k8s-version-927922) Reserving static IP address...
	I0103 20:13:49.132059   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has current primary IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:49.132507   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "old-k8s-version-927922", mac: "52:54:00:61:79:06", ip: "192.168.72.12"} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:13:49.132543   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | skip adding static IP to network mk-old-k8s-version-927922 - found existing host DHCP lease matching {name: "old-k8s-version-927922", mac: "52:54:00:61:79:06", ip: "192.168.72.12"}
	I0103 20:13:49.132560   61400 main.go:141] libmachine: (old-k8s-version-927922) Reserved static IP address: 192.168.72.12
	I0103 20:13:49.132582   61400 main.go:141] libmachine: (old-k8s-version-927922) Waiting for SSH to be available...
	I0103 20:13:49.132597   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | Getting to WaitForSSH function...
	I0103 20:13:49.135129   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:49.135499   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:79:06", ip: ""} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:13:49.135536   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:49.135703   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | Using SSH client type: external
	I0103 20:13:49.135728   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | Using SSH private key: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/old-k8s-version-927922/id_rsa (-rw-------)
	I0103 20:13:49.135765   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.12 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17885-9609/.minikube/machines/old-k8s-version-927922/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0103 20:13:49.135780   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | About to run SSH command:
	I0103 20:13:49.135796   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | exit 0
	I0103 20:13:49.226568   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | SSH cmd err, output: <nil>: 
	I0103 20:13:49.226890   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetConfigRaw
	I0103 20:13:49.227536   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetIP
	I0103 20:13:49.230668   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:49.231038   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:79:06", ip: ""} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:13:49.231064   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:49.231277   61400 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/old-k8s-version-927922/config.json ...
	I0103 20:13:49.231456   61400 machine.go:88] provisioning docker machine ...
	I0103 20:13:49.231473   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .DriverName
	I0103 20:13:49.231708   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetMachineName
	I0103 20:13:49.231862   61400 buildroot.go:166] provisioning hostname "old-k8s-version-927922"
	I0103 20:13:49.231885   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetMachineName
	I0103 20:13:49.232002   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHHostname
	I0103 20:13:49.234637   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:49.235012   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:79:06", ip: ""} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:13:49.235048   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:49.235196   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHPort
	I0103 20:13:49.235338   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHKeyPath
	I0103 20:13:49.235445   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHKeyPath
	I0103 20:13:49.235543   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHUsername
	I0103 20:13:49.235748   61400 main.go:141] libmachine: Using SSH client type: native
	I0103 20:13:49.236196   61400 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.72.12 22 <nil> <nil>}
	I0103 20:13:49.236226   61400 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-927922 && echo "old-k8s-version-927922" | sudo tee /etc/hostname
	I0103 20:13:49.377588   61400 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-927922
	
	I0103 20:13:49.377625   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHHostname
	I0103 20:13:49.381244   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:49.381634   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:79:06", ip: ""} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:13:49.381680   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:49.381885   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHPort
	I0103 20:13:49.382115   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHKeyPath
	I0103 20:13:49.382311   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHKeyPath
	I0103 20:13:49.382538   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHUsername
	I0103 20:13:49.382721   61400 main.go:141] libmachine: Using SSH client type: native
	I0103 20:13:49.383096   61400 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.72.12 22 <nil> <nil>}
	I0103 20:13:49.383125   61400 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-927922' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-927922/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-927922' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0103 20:13:49.517214   61400 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0103 20:13:49.517246   61400 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17885-9609/.minikube CaCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17885-9609/.minikube}
	I0103 20:13:49.517268   61400 buildroot.go:174] setting up certificates
	I0103 20:13:49.517280   61400 provision.go:83] configureAuth start
	I0103 20:13:49.517299   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetMachineName
	I0103 20:13:49.517606   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetIP
	I0103 20:13:49.520819   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:49.521255   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:79:06", ip: ""} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:13:49.521284   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:49.521442   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHHostname
	I0103 20:13:49.523926   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:49.524310   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:79:06", ip: ""} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:13:49.524364   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:49.524495   61400 provision.go:138] copyHostCerts
	I0103 20:13:49.524604   61400 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem, removing ...
	I0103 20:13:49.524618   61400 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem
	I0103 20:13:49.524714   61400 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem (1078 bytes)
	I0103 20:13:49.524842   61400 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem, removing ...
	I0103 20:13:49.524855   61400 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem
	I0103 20:13:49.524885   61400 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem (1123 bytes)
	I0103 20:13:49.524982   61400 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem, removing ...
	I0103 20:13:49.525020   61400 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem
	I0103 20:13:49.525063   61400 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem (1679 bytes)
	I0103 20:13:49.525143   61400 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-927922 san=[192.168.72.12 192.168.72.12 localhost 127.0.0.1 minikube old-k8s-version-927922]
	I0103 20:13:49.896621   61400 provision.go:172] copyRemoteCerts
	I0103 20:13:49.896687   61400 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0103 20:13:49.896728   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHHostname
	I0103 20:13:49.899859   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:49.900239   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:79:06", ip: ""} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:13:49.900274   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:49.900456   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHPort
	I0103 20:13:49.900690   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHKeyPath
	I0103 20:13:49.900873   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHUsername
	I0103 20:13:49.901064   61400 sshutil.go:53] new ssh client: &{IP:192.168.72.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/old-k8s-version-927922/id_rsa Username:docker}
	I0103 20:13:49.993569   61400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0103 20:13:50.017597   61400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0103 20:13:50.041139   61400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0103 20:13:50.064499   61400 provision.go:86] duration metric: configureAuth took 547.178498ms
	I0103 20:13:50.064533   61400 buildroot.go:189] setting minikube options for container-runtime
	I0103 20:13:50.064770   61400 config.go:182] Loaded profile config "old-k8s-version-927922": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0103 20:13:50.064848   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHHostname
	I0103 20:13:50.068198   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:50.068637   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:79:06", ip: ""} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:13:50.068672   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:50.068873   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHPort
	I0103 20:13:50.069080   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHKeyPath
	I0103 20:13:50.069284   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHKeyPath
	I0103 20:13:50.069457   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHUsername
	I0103 20:13:50.069640   61400 main.go:141] libmachine: Using SSH client type: native
	I0103 20:13:50.070115   61400 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.72.12 22 <nil> <nil>}
	I0103 20:13:50.070146   61400 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0103 20:13:50.450845   61400 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0103 20:13:50.450873   61400 machine.go:91] provisioned docker machine in 1.219404511s
	I0103 20:13:50.450886   61400 start.go:300] post-start starting for "old-k8s-version-927922" (driver="kvm2")
	I0103 20:13:50.450899   61400 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0103 20:13:50.450924   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .DriverName
	I0103 20:13:50.451263   61400 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0103 20:13:50.451328   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHHostname
	I0103 20:13:50.455003   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:50.455413   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:79:06", ip: ""} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:13:50.455436   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:50.455644   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHPort
	I0103 20:13:50.455796   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHKeyPath
	I0103 20:13:50.455919   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHUsername
	I0103 20:13:50.456031   61400 sshutil.go:53] new ssh client: &{IP:192.168.72.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/old-k8s-version-927922/id_rsa Username:docker}
	I0103 20:13:50.563846   61400 ssh_runner.go:195] Run: cat /etc/os-release
	I0103 20:13:50.569506   61400 info.go:137] Remote host: Buildroot 2021.02.12
	I0103 20:13:50.569532   61400 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-9609/.minikube/addons for local assets ...
	I0103 20:13:50.569626   61400 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-9609/.minikube/files for local assets ...
	I0103 20:13:50.569726   61400 filesync.go:149] local asset: /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem -> 167952.pem in /etc/ssl/certs
	I0103 20:13:50.569857   61400 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0103 20:13:50.581218   61400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem --> /etc/ssl/certs/167952.pem (1708 bytes)
	I0103 20:13:50.612328   61400 start.go:303] post-start completed in 161.425373ms
	I0103 20:13:50.612359   61400 fix.go:56] fixHost completed within 20.392994827s
	I0103 20:13:50.612383   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHHostname
	I0103 20:13:50.615776   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:50.616241   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:79:06", ip: ""} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:13:50.616268   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:50.616368   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHPort
	I0103 20:13:50.616655   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHKeyPath
	I0103 20:13:50.616849   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHKeyPath
	I0103 20:13:50.617088   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHUsername
	I0103 20:13:50.617286   61400 main.go:141] libmachine: Using SSH client type: native
	I0103 20:13:50.617764   61400 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.72.12 22 <nil> <nil>}
	I0103 20:13:50.617791   61400 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0103 20:13:50.740437   61400 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704312830.691065491
	
	I0103 20:13:50.740506   61400 fix.go:206] guest clock: 1704312830.691065491
	I0103 20:13:50.740528   61400 fix.go:219] Guest: 2024-01-03 20:13:50.691065491 +0000 UTC Remote: 2024-01-03 20:13:50.612363446 +0000 UTC m=+357.606588552 (delta=78.702045ms)
	I0103 20:13:50.740563   61400 fix.go:190] guest clock delta is within tolerance: 78.702045ms
	I0103 20:13:50.740574   61400 start.go:83] releasing machines lock for "old-k8s-version-927922", held for 20.521248173s
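(Editorial note, not part of the log.) The fix step above reads the guest clock over SSH with `date +%s.%N`, compares it against the host clock, and accepts the skew because the 78.7 ms delta is within tolerance. Below is a minimal Go sketch of that comparison for illustration only; it is not minikube's own code, and the 2-second tolerance and the float-based parsing of the `date` output are assumptions made just for the example.

    package main

    import (
    	"fmt"
    	"strconv"
    	"time"
    )

    // parseGuestClock converts `date +%s.%N` output ("seconds.nanoseconds")
    // into a time.Time. float64 keeps roughly microsecond precision, which is
    // plenty for a tolerance check like the one in the log.
    func parseGuestClock(out string) (time.Time, error) {
    	secs, err := strconv.ParseFloat(out, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	sec := int64(secs)
    	nsec := int64((secs - float64(sec)) * 1e9)
    	return time.Unix(sec, nsec), nil
    }

    func main() {
    	// Sample value taken from the log above; in practice it would come
    	// from running `date +%s.%N` on the guest over SSH.
    	guest, err := parseGuestClock("1704312830.691065491")
    	if err != nil {
    		panic(err)
    	}
    	host := time.Now()

    	delta := host.Sub(guest)
    	if delta < 0 {
    		delta = -delta
    	}

    	const tolerance = 2 * time.Second // assumed threshold for this sketch
    	if delta <= tolerance {
    		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
    	} else {
    		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
    	}
    }
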
	I0103 20:13:50.740606   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .DriverName
	I0103 20:13:50.740879   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetIP
	I0103 20:13:50.743952   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:50.744357   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:79:06", ip: ""} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:13:50.744397   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:50.744668   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .DriverName
	I0103 20:13:50.745932   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .DriverName
	I0103 20:13:50.746189   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .DriverName
	I0103 20:13:50.746302   61400 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0103 20:13:50.746343   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHHostname
	I0103 20:13:50.746759   61400 ssh_runner.go:195] Run: cat /version.json
	I0103 20:13:50.746784   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHHostname
	I0103 20:13:50.749593   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:50.749994   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:79:06", ip: ""} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:13:50.750029   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:50.750496   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHPort
	I0103 20:13:50.750738   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHKeyPath
	I0103 20:13:50.750900   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHUsername
	I0103 20:13:50.751141   61400 sshutil.go:53] new ssh client: &{IP:192.168.72.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/old-k8s-version-927922/id_rsa Username:docker}
	I0103 20:13:50.751696   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHPort
	I0103 20:13:50.751779   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:50.751842   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHKeyPath
	I0103 20:13:50.751898   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:79:06", ip: ""} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:13:50.751960   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHUsername
	I0103 20:13:50.752031   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:50.752063   61400 sshutil.go:53] new ssh client: &{IP:192.168.72.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/old-k8s-version-927922/id_rsa Username:docker}
	I0103 20:13:50.841084   61400 ssh_runner.go:195] Run: systemctl --version
	I0103 20:13:50.882564   61400 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0103 20:13:51.041188   61400 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0103 20:13:51.049023   61400 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0103 20:13:51.049103   61400 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0103 20:13:51.068267   61400 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0103 20:13:51.068297   61400 start.go:475] detecting cgroup driver to use...
	I0103 20:13:51.068371   61400 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0103 20:13:51.086266   61400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0103 20:13:51.101962   61400 docker.go:203] disabling cri-docker service (if available) ...
	I0103 20:13:51.102030   61400 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0103 20:13:51.118269   61400 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0103 20:13:51.134642   61400 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0103 20:13:51.310207   61400 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0103 20:13:51.495609   61400 docker.go:219] disabling docker service ...
	I0103 20:13:51.495743   61400 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0103 20:13:51.512101   61400 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0103 20:13:51.527244   61400 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0103 20:13:51.696874   61400 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0103 20:13:51.836885   61400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0103 20:13:51.849905   61400 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0103 20:13:51.867827   61400 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0103 20:13:51.867895   61400 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:13:51.877598   61400 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0103 20:13:51.877713   61400 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:13:51.886744   61400 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:13:51.898196   61400 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:13:51.910021   61400 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0103 20:13:51.921882   61400 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0103 20:13:51.930668   61400 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0103 20:13:51.930727   61400 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0103 20:13:51.943294   61400 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0103 20:13:51.952273   61400 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0103 20:13:52.065108   61400 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0103 20:13:52.272042   61400 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0103 20:13:52.272143   61400 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0103 20:13:52.277268   61400 start.go:543] Will wait 60s for crictl version
	I0103 20:13:52.277436   61400 ssh_runner.go:195] Run: which crictl
	I0103 20:13:52.281294   61400 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0103 20:13:52.334056   61400 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0103 20:13:52.334231   61400 ssh_runner.go:195] Run: crio --version
	I0103 20:13:52.390900   61400 ssh_runner.go:195] Run: crio --version
	I0103 20:13:52.454400   61400 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
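(Editorial note, not part of the log.) After restarting cri-o, the log waits up to 60s for the socket at /var/run/crio/crio.sock to appear and then queries the runtime with `crictl version`. The Go sketch below illustrates that wait-then-query pattern under simplified assumptions: it runs locally rather than over SSH as the test does, and the 500 ms poll interval is an arbitrary choice for the example.

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"time"
    )

    // waitForSocket polls for a path until it exists or the timeout expires,
    // mirroring the "Will wait 60s for socket path" step in the log above.
    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out waiting for %s", path)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }

    func main() {
    	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	// Once the socket exists, ask the runtime for its version the same way
    	// the test does (requires sudo and crictl on PATH).
    	out, err := exec.Command("sudo", "crictl", "version").CombinedOutput()
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Print(string(out))
    }
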
	I0103 20:13:52.455682   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetIP
	I0103 20:13:52.459194   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:52.459656   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:79:06", ip: ""} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:13:52.459683   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:13:52.460250   61400 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0103 20:13:52.465579   61400 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0103 20:13:52.480500   61400 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0103 20:13:52.480620   61400 ssh_runner.go:195] Run: sudo crictl images --output json
	I0103 20:13:52.532378   61400 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0103 20:13:52.532450   61400 ssh_runner.go:195] Run: which lz4
	I0103 20:13:52.537132   61400 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0103 20:13:52.541880   61400 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0103 20:13:52.541912   61400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0103 20:13:50.443235   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:13:52.942235   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:13:51.801673   62015 addons.go:508] enable addons completed in 1.467711333s: enabled=[metrics-server storage-provisioner default-storageclass]
	I0103 20:13:52.779944   62015 node_ready.go:58] node "no-preload-749210" has status "Ready":"False"
	I0103 20:13:50.699945   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:13:51.199773   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:13:51.227739   62050 api_server.go:72] duration metric: took 2.52863821s to wait for apiserver process to appear ...
	I0103 20:13:51.227768   62050 api_server.go:88] waiting for apiserver healthz status ...
	I0103 20:13:51.227789   62050 api_server.go:253] Checking apiserver healthz at https://192.168.39.139:8444/healthz ...
	I0103 20:13:51.228288   62050 api_server.go:269] stopped: https://192.168.39.139:8444/healthz: Get "https://192.168.39.139:8444/healthz": dial tcp 192.168.39.139:8444: connect: connection refused
	I0103 20:13:51.728906   62050 api_server.go:253] Checking apiserver healthz at https://192.168.39.139:8444/healthz ...
	I0103 20:13:55.679221   62050 api_server.go:279] https://192.168.39.139:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0103 20:13:55.679255   62050 api_server.go:103] status: https://192.168.39.139:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0103 20:13:55.679273   62050 api_server.go:253] Checking apiserver healthz at https://192.168.39.139:8444/healthz ...
	I0103 20:13:55.722466   62050 api_server.go:279] https://192.168.39.139:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0103 20:13:55.722528   62050 api_server.go:103] status: https://192.168.39.139:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0103 20:13:55.728699   62050 api_server.go:253] Checking apiserver healthz at https://192.168.39.139:8444/healthz ...
	I0103 20:13:55.771739   62050 api_server.go:279] https://192.168.39.139:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0103 20:13:55.771841   62050 api_server.go:103] status: https://192.168.39.139:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0103 20:13:56.228041   62050 api_server.go:253] Checking apiserver healthz at https://192.168.39.139:8444/healthz ...
	I0103 20:13:56.234578   62050 api_server.go:279] https://192.168.39.139:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0103 20:13:56.234618   62050 api_server.go:103] status: https://192.168.39.139:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0103 20:13:56.728122   62050 api_server.go:253] Checking apiserver healthz at https://192.168.39.139:8444/healthz ...
	I0103 20:13:56.734464   62050 api_server.go:279] https://192.168.39.139:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0103 20:13:56.734505   62050 api_server.go:103] status: https://192.168.39.139:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0103 20:13:57.228124   62050 api_server.go:253] Checking apiserver healthz at https://192.168.39.139:8444/healthz ...
	I0103 20:13:57.239527   62050 api_server.go:279] https://192.168.39.139:8444/healthz returned 200:
	ok
	I0103 20:13:57.253416   62050 api_server.go:141] control plane version: v1.28.4
	I0103 20:13:57.253445   62050 api_server.go:131] duration metric: took 6.025669125s to wait for apiserver health ...
	I0103 20:13:57.253456   62050 cni.go:84] Creating CNI manager for ""
	I0103 20:13:57.253464   62050 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0103 20:13:57.255608   62050 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
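(Editorial note, not part of the log.) The healthz retries above follow a simple pattern: poll the apiserver endpoint, tolerate connection-refused, 403, and 500 responses while the post-start hooks finish, and stop once it returns 200 "ok". The following Go sketch shows that loop under stated assumptions; it is not minikube's implementation, and the anonymous request, skipped TLS verification, and 500 ms poll interval are simplifications made only for the example.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"os"
    	"time"
    )

    // waitForHealthz polls the apiserver /healthz endpoint until it returns 200,
    // mirroring the retry sequence in the log (refused -> 403 -> 500 -> 200 ok).
    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// The apiserver presents a cluster-internal certificate, so
    		// verification is skipped here purely for the sake of the sketch.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Printf("healthz returned 200: %s\n", body)
    				return nil
    			}
    			fmt.Printf("healthz returned %d, retrying...\n", resp.StatusCode)
    		} else {
    			fmt.Printf("healthz not reachable yet: %v\n", err)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver did not become healthy within %v", timeout)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.39.139:8444/healthz", 2*time.Minute); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }
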
	I0103 20:13:54.091654   61400 crio.go:444] Took 1.554550 seconds to copy over tarball
	I0103 20:13:54.091734   61400 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0103 20:13:57.252728   61400 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.160960283s)
	I0103 20:13:57.252762   61400 crio.go:451] Took 3.161068 seconds to extract the tarball
	I0103 20:13:57.252773   61400 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0103 20:13:57.307431   61400 ssh_runner.go:195] Run: sudo crictl images --output json
	I0103 20:13:57.362170   61400 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0103 20:13:57.362199   61400 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0103 20:13:57.362266   61400 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 20:13:57.362306   61400 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0103 20:13:57.362491   61400 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0103 20:13:57.362505   61400 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0103 20:13:57.362630   61400 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0103 20:13:57.362663   61400 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0103 20:13:57.362749   61400 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0103 20:13:57.362830   61400 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0103 20:13:57.364964   61400 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0103 20:13:57.364981   61400 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0103 20:13:57.364999   61400 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0103 20:13:57.365049   61400 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0103 20:13:57.365081   61400 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 20:13:57.365159   61400 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0103 20:13:57.365337   61400 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0103 20:13:57.365364   61400 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
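The cache_images logic above is checking whether each of these images already exists in the CRI-O store before transferring it. A manual spot check for one image would look like this (illustrative only; crictl inspecti returns non-zero when the image is absent):

sudo crictl inspecti registry.k8s.io/kube-apiserver:v1.16.0 >/dev/null 2>&1 \
  && echo "present in container runtime" \
  || echo "missing -> will be loaded from the local cache"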
	I0103 20:13:57.585886   61400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0103 20:13:57.611291   61400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0103 20:13:57.622467   61400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0103 20:13:57.623443   61400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0103 20:13:57.627321   61400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0103 20:13:57.630211   61400 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0103 20:13:57.630253   61400 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0103 20:13:57.630299   61400 ssh_runner.go:195] Run: which crictl
	I0103 20:13:57.647358   61400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0103 20:13:57.670079   61400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0103 20:13:57.724516   61400 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0103 20:13:57.724560   61400 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0103 20:13:57.724606   61400 ssh_runner.go:195] Run: which crictl
	I0103 20:13:57.747338   61400 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0103 20:13:57.747387   61400 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0103 20:13:57.747451   61400 ssh_runner.go:195] Run: which crictl
	I0103 20:13:57.767682   61400 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0103 20:13:57.767741   61400 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0103 20:13:57.767749   61400 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0103 20:13:57.767772   61400 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0103 20:13:57.767782   61400 ssh_runner.go:195] Run: which crictl
	I0103 20:13:57.767778   61400 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0103 20:13:57.767834   61400 ssh_runner.go:195] Run: which crictl
	I0103 20:13:57.811841   61400 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0103 20:13:57.811895   61400 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0103 20:13:57.811861   61400 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0103 20:13:57.811948   61400 ssh_runner.go:195] Run: which crictl
	I0103 20:13:57.811984   61400 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0103 20:13:57.811948   61400 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0103 20:13:57.812053   61400 ssh_runner.go:195] Run: which crictl
	I0103 20:13:57.812098   61400 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0103 20:13:57.812128   61400 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0103 20:13:57.849648   61400 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0103 20:13:57.849722   61400 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0103 20:13:57.916421   61400 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0103 20:13:57.916483   61400 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0103 20:13:57.916529   61400 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.1
	I0103 20:13:57.936449   61400 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0103 20:13:57.936474   61400 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0103 20:13:57.936485   61400 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0103 20:13:57.936538   61400 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0103 20:13:55.436957   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:13:57.441634   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:13:55.278078   62015 node_ready.go:58] node "no-preload-749210" has status "Ready":"False"
	I0103 20:13:57.280673   62015 node_ready.go:58] node "no-preload-749210" has status "Ready":"False"
	I0103 20:13:58.185787   62015 node_ready.go:49] node "no-preload-749210" has status "Ready":"True"
	I0103 20:13:58.185819   62015 node_ready.go:38] duration metric: took 7.413368774s waiting for node "no-preload-749210" to be "Ready" ...
	I0103 20:13:58.185837   62015 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:13:58.196599   62015 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-rbx58" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:58.203024   62015 pod_ready.go:92] pod "coredns-76f75df574-rbx58" in "kube-system" namespace has status "Ready":"True"
	I0103 20:13:58.203047   62015 pod_ready.go:81] duration metric: took 6.423108ms waiting for pod "coredns-76f75df574-rbx58" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:58.203057   62015 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-749210" in "kube-system" namespace to be "Ready" ...
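The node_ready wait above is polling the node's Ready condition. An equivalent one-off check from the test host would be the following (the context name is taken from the profile in the log; the jsonpath expression is standard kubectl):

kubectl --context no-preload-749210 get node no-preload-749210 \
  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
# prints "True" once the kubelet has reported Ready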
	I0103 20:13:57.257123   62050 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0103 20:13:57.293641   62050 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0103 20:13:57.341721   62050 system_pods.go:43] waiting for kube-system pods to appear ...
	I0103 20:13:57.360995   62050 system_pods.go:59] 8 kube-system pods found
	I0103 20:13:57.361054   62050 system_pods.go:61] "coredns-5dd5756b68-zxzqg" [d066762e-7e1f-4b3a-9b21-6a7a3ca53edd] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0103 20:13:57.361065   62050 system_pods.go:61] "etcd-default-k8s-diff-port-018788" [c0023ec6-ae61-4532-840e-287e9945f4ec] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0103 20:13:57.361109   62050 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-018788" [bba03f36-cef8-4e19-adc5-1a65756bdf1c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0103 20:13:57.361132   62050 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-018788" [baf7a3c2-3573-4977-be30-d63e4df2de22] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0103 20:13:57.361147   62050 system_pods.go:61] "kube-proxy-wqjlv" [de5a1b04-4bce-4111-bfe8-2adb2f947d78] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0103 20:13:57.361171   62050 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-018788" [cdc74e5c-0085-49ae-9471-fce52a1a6b2f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0103 20:13:57.361189   62050 system_pods.go:61] "metrics-server-57f55c9bc5-pgbbj" [ee3963d9-1627-4e78-91e5-1f92c2011f4b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0103 20:13:57.361198   62050 system_pods.go:61] "storage-provisioner" [ef3511cb-5587-4ea5-86b6-d52cc5afb226] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0103 20:13:57.361207   62050 system_pods.go:74] duration metric: took 19.402129ms to wait for pod list to return data ...
	I0103 20:13:57.361218   62050 node_conditions.go:102] verifying NodePressure condition ...
	I0103 20:13:57.369396   62050 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0103 20:13:57.369435   62050 node_conditions.go:123] node cpu capacity is 2
	I0103 20:13:57.369449   62050 node_conditions.go:105] duration metric: took 8.224276ms to run NodePressure ...
	I0103 20:13:57.369470   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:13:57.615954   62050 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0103 20:13:57.624280   62050 kubeadm.go:787] kubelet initialised
	I0103 20:13:57.624312   62050 kubeadm.go:788] duration metric: took 8.328431ms waiting for restarted kubelet to initialise ...
	I0103 20:13:57.624321   62050 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:13:57.637920   62050 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-zxzqg" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:58.734401   62050 pod_ready.go:97] node "default-k8s-diff-port-018788" hosting pod "coredns-5dd5756b68-zxzqg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018788" has status "Ready":"False"
	I0103 20:13:58.734439   62050 pod_ready.go:81] duration metric: took 1.096478242s waiting for pod "coredns-5dd5756b68-zxzqg" in "kube-system" namespace to be "Ready" ...
	E0103 20:13:58.734454   62050 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-018788" hosting pod "coredns-5dd5756b68-zxzqg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018788" has status "Ready":"False"
	I0103 20:13:58.734463   62050 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-018788" in "kube-system" namespace to be "Ready" ...
	I0103 20:13:59.605120   62050 pod_ready.go:97] node "default-k8s-diff-port-018788" hosting pod "etcd-default-k8s-diff-port-018788" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018788" has status "Ready":"False"
	I0103 20:13:59.605156   62050 pod_ready.go:81] duration metric: took 870.676494ms waiting for pod "etcd-default-k8s-diff-port-018788" in "kube-system" namespace to be "Ready" ...
	E0103 20:13:59.605168   62050 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-018788" hosting pod "etcd-default-k8s-diff-port-018788" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018788" has status "Ready":"False"
	I0103 20:13:59.605174   62050 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-018788" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:00.176543   62050 pod_ready.go:97] node "default-k8s-diff-port-018788" hosting pod "kube-apiserver-default-k8s-diff-port-018788" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018788" has status "Ready":"False"
	I0103 20:14:00.176583   62050 pod_ready.go:81] duration metric: took 571.400586ms waiting for pod "kube-apiserver-default-k8s-diff-port-018788" in "kube-system" namespace to be "Ready" ...
	E0103 20:14:00.176599   62050 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-018788" hosting pod "kube-apiserver-default-k8s-diff-port-018788" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018788" has status "Ready":"False"
	I0103 20:14:00.176608   62050 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-018788" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:00.201556   62050 pod_ready.go:97] node "default-k8s-diff-port-018788" hosting pod "kube-controller-manager-default-k8s-diff-port-018788" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018788" has status "Ready":"False"
	I0103 20:14:00.201620   62050 pod_ready.go:81] duration metric: took 24.987825ms waiting for pod "kube-controller-manager-default-k8s-diff-port-018788" in "kube-system" namespace to be "Ready" ...
	E0103 20:14:00.201637   62050 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-018788" hosting pod "kube-controller-manager-default-k8s-diff-port-018788" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018788" has status "Ready":"False"
	I0103 20:14:00.201647   62050 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-wqjlv" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:00.233069   62050 pod_ready.go:97] node "default-k8s-diff-port-018788" hosting pod "kube-proxy-wqjlv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018788" has status "Ready":"False"
	I0103 20:14:00.233108   62050 pod_ready.go:81] duration metric: took 31.451633ms waiting for pod "kube-proxy-wqjlv" in "kube-system" namespace to be "Ready" ...
	E0103 20:14:00.233127   62050 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-018788" hosting pod "kube-proxy-wqjlv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018788" has status "Ready":"False"
	I0103 20:14:00.233135   62050 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-018788" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:00.253505   62050 pod_ready.go:97] node "default-k8s-diff-port-018788" hosting pod "kube-scheduler-default-k8s-diff-port-018788" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018788" has status "Ready":"False"
	I0103 20:14:00.253534   62050 pod_ready.go:81] duration metric: took 20.386039ms waiting for pod "kube-scheduler-default-k8s-diff-port-018788" in "kube-system" namespace to be "Ready" ...
	E0103 20:14:00.253550   62050 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-018788" hosting pod "kube-scheduler-default-k8s-diff-port-018788" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018788" has status "Ready":"False"
	I0103 20:14:00.253559   62050 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:00.272626   62050 pod_ready.go:97] node "default-k8s-diff-port-018788" hosting pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018788" has status "Ready":"False"
	I0103 20:14:00.272661   62050 pod_ready.go:81] duration metric: took 19.09311ms waiting for pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace to be "Ready" ...
	E0103 20:14:00.272677   62050 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-018788" hosting pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018788" has status "Ready":"False"
	I0103 20:14:00.272687   62050 pod_ready.go:38] duration metric: took 2.64835186s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
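The extra wait for system-critical pods logged above can also be expressed with kubectl wait; this is a hedged equivalent rather than what minikube itself runs, with the label selectors copied from the log line:

kubectl --context default-k8s-diff-port-018788 -n kube-system wait pod \
  -l k8s-app=kube-dns --for=condition=Ready --timeout=4m
# repeat with -l component=etcd, component=kube-apiserver, component=kube-controller-manager,
# k8s-app=kube-proxy and component=kube-scheduler for the remaining system-critical pods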
	I0103 20:14:00.272705   62050 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0103 20:14:00.321126   62050 ops.go:34] apiserver oom_adj: -16
	I0103 20:14:00.321189   62050 kubeadm.go:640] restartCluster took 23.164374098s
	I0103 20:14:00.321205   62050 kubeadm.go:406] StartCluster complete in 23.210428007s
	I0103 20:14:00.321226   62050 settings.go:142] acquiring lock: {Name:mkd213c48538fa01cb82b417485055a8adbf5e4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:14:00.321322   62050 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17885-9609/kubeconfig
	I0103 20:14:00.323470   62050 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-9609/kubeconfig: {Name:mkbd4e6a8b39f5a4a43fb71671a7bbd8b1617cf1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:14:00.323925   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0103 20:14:00.324242   62050 config.go:182] Loaded profile config "default-k8s-diff-port-018788": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 20:14:00.324381   62050 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0103 20:14:00.324467   62050 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-018788"
	I0103 20:14:00.324487   62050 addons.go:237] Setting addon storage-provisioner=true in "default-k8s-diff-port-018788"
	W0103 20:14:00.324495   62050 addons.go:246] addon storage-provisioner should already be in state true
	I0103 20:14:00.324536   62050 host.go:66] Checking if "default-k8s-diff-port-018788" exists ...
	I0103 20:14:00.324984   62050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:14:00.325013   62050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:14:00.325285   62050 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-018788"
	I0103 20:14:00.325304   62050 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-018788"
	I0103 20:14:00.325329   62050 addons.go:237] Setting addon metrics-server=true in "default-k8s-diff-port-018788"
	W0103 20:14:00.325337   62050 addons.go:246] addon metrics-server should already be in state true
	I0103 20:14:00.325376   62050 host.go:66] Checking if "default-k8s-diff-port-018788" exists ...
	I0103 20:14:00.325309   62050 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-018788"
	I0103 20:14:00.325722   62050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:14:00.325740   62050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:14:00.325935   62050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:14:00.326021   62050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:14:00.347496   62050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42465
	I0103 20:14:00.347895   62050 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:14:00.348392   62050 main.go:141] libmachine: Using API Version  1
	I0103 20:14:00.348415   62050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:14:00.348728   62050 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:14:00.349192   62050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:14:00.349228   62050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:14:00.349916   62050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42905
	I0103 20:14:00.350369   62050 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:14:00.351043   62050 main.go:141] libmachine: Using API Version  1
	I0103 20:14:00.351067   62050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:14:00.351579   62050 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:14:00.352288   62050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:14:00.352392   62050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:14:00.358540   62050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33231
	I0103 20:14:00.359079   62050 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:14:00.359582   62050 main.go:141] libmachine: Using API Version  1
	I0103 20:14:00.359607   62050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:14:00.359939   62050 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:14:00.360114   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetState
	I0103 20:14:00.364583   62050 addons.go:237] Setting addon default-storageclass=true in "default-k8s-diff-port-018788"
	W0103 20:14:00.364614   62050 addons.go:246] addon default-storageclass should already be in state true
	I0103 20:14:00.364645   62050 host.go:66] Checking if "default-k8s-diff-port-018788" exists ...
	I0103 20:14:00.365032   62050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:14:00.365080   62050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:14:00.365268   62050 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-018788" context rescaled to 1 replicas
	I0103 20:14:00.365315   62050 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.139 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0103 20:14:00.367628   62050 out.go:177] * Verifying Kubernetes components...
	I0103 20:14:00.376061   62050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 20:14:00.382421   62050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42521
	I0103 20:14:00.382601   62050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39615
	I0103 20:14:00.382708   62050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40189
	I0103 20:14:00.383285   62050 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:14:00.383310   62050 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:14:00.383837   62050 main.go:141] libmachine: Using API Version  1
	I0103 20:14:00.383837   62050 main.go:141] libmachine: Using API Version  1
	I0103 20:14:00.383855   62050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:14:00.383862   62050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:14:00.384200   62050 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:14:00.384674   62050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:14:00.384701   62050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:14:00.384740   62050 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:14:00.384914   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetState
	I0103 20:14:00.386513   62050 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:14:00.387010   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .DriverName
	I0103 20:14:00.387325   62050 main.go:141] libmachine: Using API Version  1
	I0103 20:14:00.387343   62050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:14:00.389302   62050 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0103 20:14:00.390931   62050 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0103 20:14:00.390945   62050 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0103 20:14:00.390960   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHHostname
	I0103 20:14:00.390651   62050 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:14:00.392318   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetState
	I0103 20:14:00.394641   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:14:00.395185   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c8:9f", ip: ""} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:14:00.395212   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:14:00.395483   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHPort
	I0103 20:14:00.395954   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .DriverName
	I0103 20:14:00.398448   62050 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 20:14:00.400431   62050 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0103 20:14:00.400454   62050 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0103 20:14:00.400476   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHHostname
	I0103 20:14:00.404480   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:14:00.405112   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c8:9f", ip: ""} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:14:00.405145   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:14:00.405765   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHPort
	I0103 20:14:00.405971   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHKeyPath
	I0103 20:14:00.407610   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHUsername
	I0103 20:14:00.407808   62050 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/default-k8s-diff-port-018788/id_rsa Username:docker}
	I0103 20:14:00.410796   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHKeyPath
	I0103 20:14:00.410964   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHUsername
	I0103 20:14:00.411436   62050 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/default-k8s-diff-port-018788/id_rsa Username:docker}
	I0103 20:14:00.417626   62050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41715
	I0103 20:14:00.418201   62050 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:14:00.422710   62050 main.go:141] libmachine: Using API Version  1
	I0103 20:14:00.422743   62050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:14:00.423232   62050 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:14:00.423421   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetState
	I0103 20:14:00.425364   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .DriverName
	I0103 20:14:00.425678   62050 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0103 20:14:00.425697   62050 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0103 20:14:00.425717   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHHostname
	I0103 20:14:00.429190   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:14:00.429720   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c8:9f", ip: ""} in network mk-default-k8s-diff-port-018788: {Iface:virbr1 ExpiryTime:2024-01-03 21:13:22 +0000 UTC Type:0 Mac:52:54:00:df:c8:9f Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:default-k8s-diff-port-018788 Clientid:01:52:54:00:df:c8:9f}
	I0103 20:14:00.429745   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | domain default-k8s-diff-port-018788 has defined IP address 192.168.39.139 and MAC address 52:54:00:df:c8:9f in network mk-default-k8s-diff-port-018788
	I0103 20:14:00.429898   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHPort
	I0103 20:14:00.430599   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHKeyPath
	I0103 20:14:00.430803   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .GetSSHUsername
	I0103 20:14:00.430946   62050 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/default-k8s-diff-port-018788/id_rsa Username:docker}
	I0103 20:14:00.621274   62050 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0103 20:14:00.621356   62050 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0103 20:14:00.641979   62050 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0103 20:14:00.681414   62050 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0103 20:14:00.682076   62050 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0103 20:14:00.682118   62050 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0103 20:14:00.760063   62050 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0103 20:14:00.760095   62050 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0103 20:14:00.833648   62050 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0103 20:14:00.840025   62050 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-018788" to be "Ready" ...
	I0103 20:14:00.840147   62050 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0103 20:14:02.423584   62050 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.78156374s)
	I0103 20:14:02.423631   62050 main.go:141] libmachine: Making call to close driver server
	I0103 20:14:02.423646   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .Close
	I0103 20:14:02.423584   62050 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.742133551s)
	I0103 20:14:02.423765   62050 main.go:141] libmachine: Making call to close driver server
	I0103 20:14:02.423784   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .Close
	I0103 20:14:02.423889   62050 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:14:02.423906   62050 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:14:02.423920   62050 main.go:141] libmachine: Making call to close driver server
	I0103 20:14:02.423930   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .Close
	I0103 20:14:02.424042   62050 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:14:02.424061   62050 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:14:02.424078   62050 main.go:141] libmachine: Making call to close driver server
	I0103 20:14:02.424076   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | Closing plugin on server side
	I0103 20:14:02.424104   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .Close
	I0103 20:14:02.424125   62050 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:14:02.424137   62050 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:14:02.424472   62050 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:14:02.424489   62050 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:14:02.424502   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | Closing plugin on server side
	I0103 20:14:02.431339   62050 main.go:141] libmachine: Making call to close driver server
	I0103 20:14:02.431368   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .Close
	I0103 20:14:02.431754   62050 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:14:02.431789   62050 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:14:02.431809   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) DBG | Closing plugin on server side
	I0103 20:14:02.575829   62050 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.742131608s)
	I0103 20:14:02.575880   62050 main.go:141] libmachine: Making call to close driver server
	I0103 20:14:02.575899   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .Close
	I0103 20:14:02.576351   62050 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:14:02.576374   62050 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:14:02.576391   62050 main.go:141] libmachine: Making call to close driver server
	I0103 20:14:02.576400   62050 main.go:141] libmachine: (default-k8s-diff-port-018788) Calling .Close
	I0103 20:14:02.576619   62050 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:14:02.576632   62050 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:14:02.576641   62050 addons.go:473] Verifying addon metrics-server=true in "default-k8s-diff-port-018788"
	I0103 20:14:02.578918   62050 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
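Once the metrics-server addon reports enabled, the usual follow-up is to verify its APIService registration and that metric scraping has started. These commands are illustrative (standard resource names, not taken from this log):

kubectl --context default-k8s-diff-port-018788 get apiservice v1beta1.metrics.k8s.io
kubectl --context default-k8s-diff-port-018788 top nodes   # errors until the first scrape completes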
	I0103 20:13:58.180342   61400 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I0103 20:13:58.180407   61400 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0103 20:13:58.180464   61400 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0103 20:13:58.194447   61400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 20:13:58.726157   61400 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I0103 20:13:58.726232   61400 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I0103 20:14:00.187852   61400 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.461700942s)
	I0103 20:14:00.187973   61400 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (1.461718478s)
	I0103 20:14:00.188007   61400 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I0103 20:14:00.188104   61400 cache_images.go:92] LoadImages completed in 2.825887616s
	W0103 20:14:00.188202   61400 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17885-9609/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0: no such file or directory
	I0103 20:14:00.188285   61400 ssh_runner.go:195] Run: crio config
	I0103 20:14:00.270343   61400 cni.go:84] Creating CNI manager for ""
	I0103 20:14:00.270372   61400 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0103 20:14:00.270393   61400 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0103 20:14:00.270416   61400 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.12 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-927922 NodeName:old-k8s-version-927922 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.12"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.12 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0103 20:14:00.270624   61400 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.12
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-927922"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.12
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.12"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-927922
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.72.12:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0103 20:14:00.270765   61400 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-927922 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.12
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-927922 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0103 20:14:00.270842   61400 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0103 20:14:00.282011   61400 binaries.go:44] Found k8s binaries, skipping transfer
	I0103 20:14:00.282093   61400 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0103 20:14:00.292954   61400 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0103 20:14:00.314616   61400 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0103 20:14:00.366449   61400 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
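Applying the freshly copied kubelet unit and its 10-kubeadm.conf drop-in is not shown in this excerpt; done by hand it would be the usual systemd sequence (illustrative, not a quote of minikube's own commands):

sudo systemctl daemon-reload
sudo systemctl restart kubelet
systemctl cat kubelet        # confirms the 10-kubeadm.conf drop-in is in effect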
	I0103 20:14:00.406579   61400 ssh_runner.go:195] Run: grep 192.168.72.12	control-plane.minikube.internal$ /etc/hosts
	I0103 20:14:00.410923   61400 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.12	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0103 20:14:00.430315   61400 certs.go:56] Setting up /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/old-k8s-version-927922 for IP: 192.168.72.12
	I0103 20:14:00.430352   61400 certs.go:190] acquiring lock for shared ca certs: {Name:mkcbd6a6a2f3ee7625ecf4a1f72bb7f9689bd33d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:14:00.430553   61400 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17885-9609/.minikube/ca.key
	I0103 20:14:00.430619   61400 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.key
	I0103 20:14:00.430718   61400 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/old-k8s-version-927922/client.key
	I0103 20:14:00.430798   61400 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/old-k8s-version-927922/apiserver.key.9a91cab3
	I0103 20:14:00.430854   61400 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/old-k8s-version-927922/proxy-client.key
	I0103 20:14:00.431018   61400 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795.pem (1338 bytes)
	W0103 20:14:00.431071   61400 certs.go:433] ignoring /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795_empty.pem, impossibly tiny 0 bytes
	I0103 20:14:00.431083   61400 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem (1675 bytes)
	I0103 20:14:00.431123   61400 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem (1078 bytes)
	I0103 20:14:00.431158   61400 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem (1123 bytes)
	I0103 20:14:00.431195   61400 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem (1679 bytes)
	I0103 20:14:00.431250   61400 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem (1708 bytes)
	I0103 20:14:00.432123   61400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/old-k8s-version-927922/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0103 20:14:00.472877   61400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/old-k8s-version-927922/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0103 20:14:00.505153   61400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/old-k8s-version-927922/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0103 20:14:00.533850   61400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/old-k8s-version-927922/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0103 20:14:00.564548   61400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0103 20:14:00.596464   61400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0103 20:14:00.626607   61400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0103 20:14:00.655330   61400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0103 20:14:00.681817   61400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem --> /usr/share/ca-certificates/167952.pem (1708 bytes)
	I0103 20:14:00.711039   61400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0103 20:14:00.742406   61400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795.pem --> /usr/share/ca-certificates/16795.pem (1338 bytes)
	I0103 20:14:00.768583   61400 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0103 20:14:00.786833   61400 ssh_runner.go:195] Run: openssl version
	I0103 20:14:00.793561   61400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167952.pem && ln -fs /usr/share/ca-certificates/167952.pem /etc/ssl/certs/167952.pem"
	I0103 20:14:00.807558   61400 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167952.pem
	I0103 20:14:00.812755   61400 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  3 19:07 /usr/share/ca-certificates/167952.pem
	I0103 20:14:00.812816   61400 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167952.pem
	I0103 20:14:00.820657   61400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167952.pem /etc/ssl/certs/3ec20f2e.0"
	I0103 20:14:00.832954   61400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0103 20:14:00.844707   61400 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:14:00.850334   61400 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  3 18:58 /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:14:00.850425   61400 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:14:00.856592   61400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0103 20:14:00.868105   61400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16795.pem && ln -fs /usr/share/ca-certificates/16795.pem /etc/ssl/certs/16795.pem"
	I0103 20:14:00.881551   61400 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16795.pem
	I0103 20:14:00.886462   61400 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  3 19:07 /usr/share/ca-certificates/16795.pem
	I0103 20:14:00.886550   61400 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16795.pem
	I0103 20:14:00.892487   61400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16795.pem /etc/ssl/certs/51391683.0"
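The /etc/ssl/certs symlink names used above (3ec20f2e.0, b5213941.0, 51391683.0) follow the OpenSSL subject-hash convention; the generic recipe the log is applying looks like this (illustrative):

hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"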
	I0103 20:14:00.904363   61400 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0103 20:14:00.909429   61400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0103 20:14:00.915940   61400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0103 20:14:00.922496   61400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0103 20:14:00.928504   61400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0103 20:14:00.936016   61400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0103 20:14:00.943008   61400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
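The -checkend 86400 probes above exit 0 only if the certificate remains valid for at least the next 24 hours, so a manual spot check reads like this (illustrative):

sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400 \
  && echo "valid for at least 24h" \
  || echo "expires within 24h (or already expired)"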
	I0103 20:14:00.949401   61400 kubeadm.go:404] StartCluster: {Name:old-k8s-version-927922 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-927922 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.12 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 20:14:00.949524   61400 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0103 20:14:00.949614   61400 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0103 20:14:00.999406   61400 cri.go:89] found id: ""
	I0103 20:14:00.999494   61400 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0103 20:14:01.011041   61400 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0103 20:14:01.011063   61400 kubeadm.go:636] restartCluster start
	I0103 20:14:01.011130   61400 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0103 20:14:01.024488   61400 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:01.026094   61400 kubeconfig.go:92] found "old-k8s-version-927922" server: "https://192.168.72.12:8443"
	I0103 20:14:01.029577   61400 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0103 20:14:01.041599   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:01.041674   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:01.055545   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:01.542034   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:01.542135   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:01.554826   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:02.042049   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:02.042166   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:02.056693   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:02.542275   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:02.542363   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:02.557025   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:03.041864   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:03.041968   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:03.054402   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:13:59.937077   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:02.440275   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:00.287822   62015 pod_ready.go:102] pod "etcd-no-preload-749210" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:00.712464   62015 pod_ready.go:92] pod "etcd-no-preload-749210" in "kube-system" namespace has status "Ready":"True"
	I0103 20:14:00.712486   62015 pod_ready.go:81] duration metric: took 2.509421629s waiting for pod "etcd-no-preload-749210" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:00.712494   62015 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-749210" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:00.722133   62015 pod_ready.go:92] pod "kube-apiserver-no-preload-749210" in "kube-system" namespace has status "Ready":"True"
	I0103 20:14:00.722175   62015 pod_ready.go:81] duration metric: took 9.671952ms waiting for pod "kube-apiserver-no-preload-749210" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:00.722188   62015 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-749210" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:00.728860   62015 pod_ready.go:92] pod "kube-controller-manager-no-preload-749210" in "kube-system" namespace has status "Ready":"True"
	I0103 20:14:00.728888   62015 pod_ready.go:81] duration metric: took 6.691622ms waiting for pod "kube-controller-manager-no-preload-749210" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:00.728901   62015 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5hwf4" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:00.736669   62015 pod_ready.go:92] pod "kube-proxy-5hwf4" in "kube-system" namespace has status "Ready":"True"
	I0103 20:14:00.736690   62015 pod_ready.go:81] duration metric: took 7.783204ms waiting for pod "kube-proxy-5hwf4" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:00.736699   62015 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-749210" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:02.245720   62015 pod_ready.go:92] pod "kube-scheduler-no-preload-749210" in "kube-system" namespace has status "Ready":"True"
	I0103 20:14:02.245750   62015 pod_ready.go:81] duration metric: took 1.509042822s waiting for pod "kube-scheduler-no-preload-749210" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:02.245764   62015 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:04.253082   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:02.580440   62050 addons.go:508] enable addons completed in 2.256058454s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0103 20:14:02.845486   62050 node_ready.go:58] node "default-k8s-diff-port-018788" has status "Ready":"False"
	I0103 20:14:05.343961   62050 node_ready.go:58] node "default-k8s-diff-port-018788" has status "Ready":"False"
	I0103 20:14:03.542326   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:03.542407   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:03.554128   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:04.041685   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:04.041779   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:04.053727   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:04.542332   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:04.542417   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:04.554478   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:05.042026   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:05.042120   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:05.055763   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:05.541892   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:05.541996   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:05.554974   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:06.042576   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:06.042675   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:06.055902   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:06.542543   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:06.542636   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:06.555494   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:07.041757   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:07.041844   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:07.053440   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:07.542083   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:07.542162   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:07.555336   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:08.041841   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:08.041929   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:08.055229   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:04.936356   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:06.938795   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:06.754049   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:09.253568   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:06.345058   62050 node_ready.go:49] node "default-k8s-diff-port-018788" has status "Ready":"True"
	I0103 20:14:06.345083   62050 node_ready.go:38] duration metric: took 5.505020144s waiting for node "default-k8s-diff-port-018788" to be "Ready" ...
	I0103 20:14:06.345094   62050 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:14:06.351209   62050 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-zxzqg" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:06.357786   62050 pod_ready.go:92] pod "coredns-5dd5756b68-zxzqg" in "kube-system" namespace has status "Ready":"True"
	I0103 20:14:06.357811   62050 pod_ready.go:81] duration metric: took 6.576128ms waiting for pod "coredns-5dd5756b68-zxzqg" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:06.357819   62050 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-018788" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:08.365570   62050 pod_ready.go:102] pod "etcd-default-k8s-diff-port-018788" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:10.366402   62050 pod_ready.go:102] pod "etcd-default-k8s-diff-port-018788" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:08.542285   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:08.542428   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:08.554155   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:09.041695   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:09.041800   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:09.054337   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:09.541733   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:09.541817   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:09.554231   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:10.041785   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:10.041863   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:10.053870   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:10.541893   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:10.541988   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:10.554220   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:11.042579   61400 api_server.go:166] Checking apiserver status ...
	I0103 20:14:11.042662   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:14:11.054683   61400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:14:11.054717   61400 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0103 20:14:11.054728   61400 kubeadm.go:1135] stopping kube-system containers ...
	I0103 20:14:11.054738   61400 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0103 20:14:11.054804   61400 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0103 20:14:11.099741   61400 cri.go:89] found id: ""
	I0103 20:14:11.099806   61400 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0103 20:14:11.115939   61400 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0103 20:14:11.125253   61400 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0103 20:14:11.125309   61400 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0103 20:14:11.134126   61400 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0103 20:14:11.134151   61400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:14:11.244373   61400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:14:12.026578   61400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:14:12.238755   61400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:14:12.326635   61400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:14:12.411494   61400 api_server.go:52] waiting for apiserver process to appear ...
	I0103 20:14:12.411597   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:14:12.912324   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:14:09.437304   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:11.937833   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:11.755341   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:14.254295   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:10.864860   62050 pod_ready.go:92] pod "etcd-default-k8s-diff-port-018788" in "kube-system" namespace has status "Ready":"True"
	I0103 20:14:10.864892   62050 pod_ready.go:81] duration metric: took 4.507065243s waiting for pod "etcd-default-k8s-diff-port-018788" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:10.864906   62050 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-018788" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:10.871510   62050 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-018788" in "kube-system" namespace has status "Ready":"True"
	I0103 20:14:10.871532   62050 pod_ready.go:81] duration metric: took 6.618246ms waiting for pod "kube-apiserver-default-k8s-diff-port-018788" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:10.871542   62050 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-018788" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:10.877385   62050 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-018788" in "kube-system" namespace has status "Ready":"True"
	I0103 20:14:10.877411   62050 pod_ready.go:81] duration metric: took 5.859396ms waiting for pod "kube-controller-manager-default-k8s-diff-port-018788" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:10.877423   62050 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wqjlv" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:10.883355   62050 pod_ready.go:92] pod "kube-proxy-wqjlv" in "kube-system" namespace has status "Ready":"True"
	I0103 20:14:10.883381   62050 pod_ready.go:81] duration metric: took 5.949857ms waiting for pod "kube-proxy-wqjlv" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:10.883391   62050 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-018788" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:10.888160   62050 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-018788" in "kube-system" namespace has status "Ready":"True"
	I0103 20:14:10.888186   62050 pod_ready.go:81] duration metric: took 4.782893ms waiting for pod "kube-scheduler-default-k8s-diff-port-018788" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:10.888198   62050 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:12.896310   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:14.897306   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:13.412544   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:14:13.912006   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:14:13.939301   61400 api_server.go:72] duration metric: took 1.527807222s to wait for apiserver process to appear ...
	I0103 20:14:13.939328   61400 api_server.go:88] waiting for apiserver healthz status ...
	I0103 20:14:13.939357   61400 api_server.go:253] Checking apiserver healthz at https://192.168.72.12:8443/healthz ...
	I0103 20:14:13.941001   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:16.438272   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:16.752567   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:18.758446   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:17.397429   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:19.399199   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:18.940403   61400 api_server.go:269] stopped: https://192.168.72.12:8443/healthz: Get "https://192.168.72.12:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0103 20:14:18.940444   61400 api_server.go:253] Checking apiserver healthz at https://192.168.72.12:8443/healthz ...
	I0103 20:14:19.563874   61400 api_server.go:279] https://192.168.72.12:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0103 20:14:19.563907   61400 api_server.go:103] status: https://192.168.72.12:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0103 20:14:19.563925   61400 api_server.go:253] Checking apiserver healthz at https://192.168.72.12:8443/healthz ...
	I0103 20:14:19.591366   61400 api_server.go:279] https://192.168.72.12:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0103 20:14:19.591397   61400 api_server.go:103] status: https://192.168.72.12:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0103 20:14:19.939684   61400 api_server.go:253] Checking apiserver healthz at https://192.168.72.12:8443/healthz ...
	I0103 20:14:19.951743   61400 api_server.go:279] https://192.168.72.12:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0103 20:14:19.951795   61400 api_server.go:103] status: https://192.168.72.12:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0103 20:14:20.439712   61400 api_server.go:253] Checking apiserver healthz at https://192.168.72.12:8443/healthz ...
	I0103 20:14:20.448251   61400 api_server.go:279] https://192.168.72.12:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0103 20:14:20.448289   61400 api_server.go:103] status: https://192.168.72.12:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0103 20:14:20.939773   61400 api_server.go:253] Checking apiserver healthz at https://192.168.72.12:8443/healthz ...
	I0103 20:14:20.946227   61400 api_server.go:279] https://192.168.72.12:8443/healthz returned 200:
	ok
	I0103 20:14:20.954666   61400 api_server.go:141] control plane version: v1.16.0
	I0103 20:14:20.954702   61400 api_server.go:131] duration metric: took 7.015366394s to wait for apiserver health ...
	I0103 20:14:20.954718   61400 cni.go:84] Creating CNI manager for ""
	I0103 20:14:20.954726   61400 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0103 20:14:20.956786   61400 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0103 20:14:20.958180   61400 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0103 20:14:20.969609   61400 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0103 20:14:20.986353   61400 system_pods.go:43] waiting for kube-system pods to appear ...
	I0103 20:14:20.996751   61400 system_pods.go:59] 8 kube-system pods found
	I0103 20:14:20.996786   61400 system_pods.go:61] "coredns-5644d7b6d9-99qhg" [d43c98b2-5ed4-42a7-bdb9-28f5b3c7b99f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0103 20:14:20.996795   61400 system_pods.go:61] "coredns-5644d7b6d9-nvbsl" [22884cc1-f360-4ee8-bafc-340bb24faa41] Running
	I0103 20:14:20.996804   61400 system_pods.go:61] "etcd-old-k8s-version-927922" [f395d0d3-416a-4915-b587-6e51eb8648a2] Running
	I0103 20:14:20.996811   61400 system_pods.go:61] "kube-apiserver-old-k8s-version-927922" [c62c011b-74fa-440c-9ff9-56721cb1a58d] Running
	I0103 20:14:20.996821   61400 system_pods.go:61] "kube-controller-manager-old-k8s-version-927922" [3d85024c-8cc4-4a99-b8b7-2151c10918f7] Pending
	I0103 20:14:20.996828   61400 system_pods.go:61] "kube-proxy-jk7jw" [ef720f69-1bfd-4e75-9943-ff7ee3145ecc] Running
	I0103 20:14:20.996835   61400 system_pods.go:61] "kube-scheduler-old-k8s-version-927922" [74ed1414-7a76-45bd-9c0e-e4c9670d4c1b] Running
	I0103 20:14:20.996845   61400 system_pods.go:61] "storage-provisioner" [4157ff41-1b3b-4eb7-b23b-2de69398161c] Running
	I0103 20:14:20.996857   61400 system_pods.go:74] duration metric: took 10.474644ms to wait for pod list to return data ...
	I0103 20:14:20.996870   61400 node_conditions.go:102] verifying NodePressure condition ...
	I0103 20:14:21.000635   61400 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0103 20:14:21.000665   61400 node_conditions.go:123] node cpu capacity is 2
	I0103 20:14:21.000677   61400 node_conditions.go:105] duration metric: took 3.80125ms to run NodePressure ...
	I0103 20:14:21.000698   61400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:14:21.233310   61400 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0103 20:14:21.241408   61400 kubeadm.go:787] kubelet initialised
	I0103 20:14:21.241445   61400 kubeadm.go:788] duration metric: took 8.096237ms waiting for restarted kubelet to initialise ...
	I0103 20:14:21.241456   61400 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:14:21.251897   61400 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-99qhg" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:21.264624   61400 pod_ready.go:97] node "old-k8s-version-927922" hosting pod "coredns-5644d7b6d9-99qhg" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-927922" has status "Ready":"False"
	I0103 20:14:21.264657   61400 pod_ready.go:81] duration metric: took 12.728783ms waiting for pod "coredns-5644d7b6d9-99qhg" in "kube-system" namespace to be "Ready" ...
	E0103 20:14:21.264670   61400 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-927922" hosting pod "coredns-5644d7b6d9-99qhg" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-927922" has status "Ready":"False"
	I0103 20:14:21.264700   61400 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-nvbsl" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:21.282371   61400 pod_ready.go:97] node "old-k8s-version-927922" hosting pod "coredns-5644d7b6d9-nvbsl" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-927922" has status "Ready":"False"
	I0103 20:14:21.282400   61400 pod_ready.go:81] duration metric: took 17.657706ms waiting for pod "coredns-5644d7b6d9-nvbsl" in "kube-system" namespace to be "Ready" ...
	E0103 20:14:21.282410   61400 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-927922" hosting pod "coredns-5644d7b6d9-nvbsl" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-927922" has status "Ready":"False"
	I0103 20:14:21.282416   61400 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-927922" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:21.288986   61400 pod_ready.go:97] node "old-k8s-version-927922" hosting pod "etcd-old-k8s-version-927922" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-927922" has status "Ready":"False"
	I0103 20:14:21.289016   61400 pod_ready.go:81] duration metric: took 6.590018ms waiting for pod "etcd-old-k8s-version-927922" in "kube-system" namespace to be "Ready" ...
	E0103 20:14:21.289028   61400 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-927922" hosting pod "etcd-old-k8s-version-927922" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-927922" has status "Ready":"False"
	I0103 20:14:21.289036   61400 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-927922" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:21.391318   61400 pod_ready.go:97] node "old-k8s-version-927922" hosting pod "kube-apiserver-old-k8s-version-927922" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-927922" has status "Ready":"False"
	I0103 20:14:21.391358   61400 pod_ready.go:81] duration metric: took 102.309139ms waiting for pod "kube-apiserver-old-k8s-version-927922" in "kube-system" namespace to be "Ready" ...
	E0103 20:14:21.391371   61400 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-927922" hosting pod "kube-apiserver-old-k8s-version-927922" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-927922" has status "Ready":"False"
	I0103 20:14:21.391390   61400 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-927922" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:21.790147   61400 pod_ready.go:97] node "old-k8s-version-927922" hosting pod "kube-controller-manager-old-k8s-version-927922" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-927922" has status "Ready":"False"
	I0103 20:14:21.790184   61400 pod_ready.go:81] duration metric: took 398.776559ms waiting for pod "kube-controller-manager-old-k8s-version-927922" in "kube-system" namespace to be "Ready" ...
	E0103 20:14:21.790202   61400 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-927922" hosting pod "kube-controller-manager-old-k8s-version-927922" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-927922" has status "Ready":"False"
	I0103 20:14:21.790213   61400 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jk7jw" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:22.190088   61400 pod_ready.go:97] node "old-k8s-version-927922" hosting pod "kube-proxy-jk7jw" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-927922" has status "Ready":"False"
	I0103 20:14:22.190118   61400 pod_ready.go:81] duration metric: took 399.895826ms waiting for pod "kube-proxy-jk7jw" in "kube-system" namespace to be "Ready" ...
	E0103 20:14:22.190132   61400 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-927922" hosting pod "kube-proxy-jk7jw" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-927922" has status "Ready":"False"
	I0103 20:14:22.190146   61400 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-927922" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:22.590412   61400 pod_ready.go:97] node "old-k8s-version-927922" hosting pod "kube-scheduler-old-k8s-version-927922" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-927922" has status "Ready":"False"
	I0103 20:14:22.590470   61400 pod_ready.go:81] duration metric: took 400.308646ms waiting for pod "kube-scheduler-old-k8s-version-927922" in "kube-system" namespace to be "Ready" ...
	E0103 20:14:22.590484   61400 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-927922" hosting pod "kube-scheduler-old-k8s-version-927922" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-927922" has status "Ready":"False"
	I0103 20:14:22.590494   61400 pod_ready.go:38] duration metric: took 1.349028144s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:14:22.590533   61400 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0103 20:14:22.610035   61400 ops.go:34] apiserver oom_adj: -16
	I0103 20:14:22.610060   61400 kubeadm.go:640] restartCluster took 21.598991094s
	I0103 20:14:22.610071   61400 kubeadm.go:406] StartCluster complete in 21.660680377s
	I0103 20:14:22.610091   61400 settings.go:142] acquiring lock: {Name:mkd213c48538fa01cb82b417485055a8adbf5e4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:14:22.610178   61400 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17885-9609/kubeconfig
	I0103 20:14:22.613053   61400 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-9609/kubeconfig: {Name:mkbd4e6a8b39f5a4a43fb71671a7bbd8b1617cf1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:14:22.613314   61400 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0103 20:14:22.613472   61400 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0103 20:14:22.613563   61400 config.go:182] Loaded profile config "old-k8s-version-927922": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0103 20:14:22.613570   61400 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-927922"
	I0103 20:14:22.613584   61400 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-927922"
	I0103 20:14:22.613597   61400 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-927922"
	I0103 20:14:22.613625   61400 addons.go:237] Setting addon metrics-server=true in "old-k8s-version-927922"
	W0103 20:14:22.613637   61400 addons.go:246] addon metrics-server should already be in state true
	I0103 20:14:22.613639   61400 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-927922"
	I0103 20:14:22.613605   61400 addons.go:237] Setting addon storage-provisioner=true in "old-k8s-version-927922"
	W0103 20:14:22.613706   61400 addons.go:246] addon storage-provisioner should already be in state true
	I0103 20:14:22.613769   61400 host.go:66] Checking if "old-k8s-version-927922" exists ...
	I0103 20:14:22.613691   61400 host.go:66] Checking if "old-k8s-version-927922" exists ...
	I0103 20:14:22.614097   61400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:14:22.614129   61400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:14:22.614170   61400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:14:22.614204   61400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:14:22.614293   61400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:14:22.614334   61400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:14:22.631032   61400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43511
	I0103 20:14:22.631689   61400 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:14:22.632149   61400 main.go:141] libmachine: Using API Version  1
	I0103 20:14:22.632172   61400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:14:22.632553   61400 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:14:22.632811   61400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46781
	I0103 20:14:22.632820   61400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42907
	I0103 20:14:22.633222   61400 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:14:22.633340   61400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:14:22.633352   61400 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:14:22.633385   61400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:14:22.633695   61400 main.go:141] libmachine: Using API Version  1
	I0103 20:14:22.633719   61400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:14:22.634106   61400 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:14:22.634117   61400 main.go:141] libmachine: Using API Version  1
	I0103 20:14:22.634139   61400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:14:22.634544   61400 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:14:22.634711   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetState
	I0103 20:14:22.634782   61400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:14:22.634821   61400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:14:22.639076   61400 addons.go:237] Setting addon default-storageclass=true in "old-k8s-version-927922"
	W0103 20:14:22.639233   61400 addons.go:246] addon default-storageclass should already be in state true
	I0103 20:14:22.639274   61400 host.go:66] Checking if "old-k8s-version-927922" exists ...
	I0103 20:14:22.640636   61400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:14:22.640703   61400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:14:22.653581   61400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38773
	I0103 20:14:22.654135   61400 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:14:22.654693   61400 main.go:141] libmachine: Using API Version  1
	I0103 20:14:22.654720   61400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:14:22.655050   61400 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:14:22.655267   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetState
	I0103 20:14:22.655611   61400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45149
	I0103 20:14:22.656058   61400 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:14:22.656503   61400 main.go:141] libmachine: Using API Version  1
	I0103 20:14:22.656527   61400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:14:22.656976   61400 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:14:22.657189   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetState
	I0103 20:14:22.657904   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .DriverName
	I0103 20:14:22.660090   61400 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 20:14:22.659044   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .DriverName
	I0103 20:14:22.659283   61400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38149
	I0103 20:14:22.663010   61400 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0103 20:14:22.663022   61400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0103 20:14:22.663037   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHHostname
	I0103 20:14:22.664758   61400 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0103 20:14:22.663341   61400 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:14:22.665665   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:14:22.666177   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:79:06", ip: ""} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:14:22.666201   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:14:22.666255   61400 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0103 20:14:22.666266   61400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0103 20:14:22.666282   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHHostname
	I0103 20:14:22.666382   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHPort
	I0103 20:14:22.666505   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHKeyPath
	I0103 20:14:22.666726   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHUsername
	I0103 20:14:22.666884   61400 sshutil.go:53] new ssh client: &{IP:192.168.72.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/old-k8s-version-927922/id_rsa Username:docker}
	I0103 20:14:22.666901   61400 main.go:141] libmachine: Using API Version  1
	I0103 20:14:22.666926   61400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:14:22.667344   61400 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:14:22.667940   61400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:14:22.667983   61400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:14:22.668718   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:14:22.668933   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:79:06", ip: ""} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:14:22.668961   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:14:22.669116   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHPort
	I0103 20:14:22.669262   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHKeyPath
	I0103 20:14:22.669388   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHUsername
	I0103 20:14:22.669506   61400 sshutil.go:53] new ssh client: &{IP:192.168.72.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/old-k8s-version-927922/id_rsa Username:docker}
	I0103 20:14:22.711545   61400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42371
	I0103 20:14:22.711969   61400 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:14:22.712493   61400 main.go:141] libmachine: Using API Version  1
	I0103 20:14:22.712519   61400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:14:22.712853   61400 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:14:22.713077   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetState
	I0103 20:14:22.715086   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .DriverName
	I0103 20:14:22.715371   61400 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0103 20:14:22.715390   61400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0103 20:14:22.715405   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHHostname
	I0103 20:14:22.718270   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:14:22.718638   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:79:06", ip: ""} in network mk-old-k8s-version-927922: {Iface:virbr3 ExpiryTime:2024-01-03 21:03:09 +0000 UTC Type:0 Mac:52:54:00:61:79:06 Iaid: IPaddr:192.168.72.12 Prefix:24 Hostname:old-k8s-version-927922 Clientid:01:52:54:00:61:79:06}
	I0103 20:14:22.718671   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | domain old-k8s-version-927922 has defined IP address 192.168.72.12 and MAC address 52:54:00:61:79:06 in network mk-old-k8s-version-927922
	I0103 20:14:22.718876   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHPort
	I0103 20:14:22.719076   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHKeyPath
	I0103 20:14:22.719263   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .GetSSHUsername
	I0103 20:14:22.719451   61400 sshutil.go:53] new ssh client: &{IP:192.168.72.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/old-k8s-version-927922/id_rsa Username:docker}
	I0103 20:14:22.795601   61400 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0103 20:14:22.887631   61400 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0103 20:14:22.887660   61400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0103 20:14:22.889717   61400 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0103 20:14:22.932293   61400 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0103 20:14:22.932324   61400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0103 20:14:22.939480   61400 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0103 20:14:22.979425   61400 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0103 20:14:22.979455   61400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0103 20:14:23.017495   61400 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0103 20:14:23.255786   61400 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-927922" context rescaled to 1 replicas
	I0103 20:14:23.255832   61400 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.12 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0103 20:14:23.257785   61400 out.go:177] * Verifying Kubernetes components...
	I0103 20:14:18.937821   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:21.435750   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:23.438082   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:23.259380   61400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 20:14:23.430371   61400 main.go:141] libmachine: Making call to close driver server
	I0103 20:14:23.430402   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .Close
	I0103 20:14:23.430532   61400 main.go:141] libmachine: Making call to close driver server
	I0103 20:14:23.430557   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .Close
	I0103 20:14:23.430710   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | Closing plugin on server side
	I0103 20:14:23.430741   61400 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:14:23.430778   61400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:14:23.430798   61400 main.go:141] libmachine: Making call to close driver server
	I0103 20:14:23.430806   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .Close
	I0103 20:14:23.432333   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | Closing plugin on server side
	I0103 20:14:23.432345   61400 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:14:23.432353   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | Closing plugin on server side
	I0103 20:14:23.432363   61400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:14:23.432373   61400 main.go:141] libmachine: Making call to close driver server
	I0103 20:14:23.432382   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .Close
	I0103 20:14:23.432383   61400 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:14:23.432394   61400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:14:23.432602   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | Closing plugin on server side
	I0103 20:14:23.432654   61400 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:14:23.432674   61400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:14:23.438313   61400 main.go:141] libmachine: Making call to close driver server
	I0103 20:14:23.438335   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .Close
	I0103 20:14:23.438566   61400 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:14:23.438585   61400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:14:23.438662   61400 main.go:141] libmachine: (old-k8s-version-927922) DBG | Closing plugin on server side
	I0103 20:14:23.598304   61400 main.go:141] libmachine: Making call to close driver server
	I0103 20:14:23.598338   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .Close
	I0103 20:14:23.598363   61400 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-927922" to be "Ready" ...
	I0103 20:14:23.598669   61400 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:14:23.598687   61400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:14:23.598696   61400 main.go:141] libmachine: Making call to close driver server
	I0103 20:14:23.598705   61400 main.go:141] libmachine: (old-k8s-version-927922) Calling .Close
	I0103 20:14:23.598917   61400 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:14:23.598938   61400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:14:23.598960   61400 addons.go:473] Verifying addon metrics-server=true in "old-k8s-version-927922"
	I0103 20:14:23.601038   61400 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0103 20:14:21.253707   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:23.254276   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:21.399352   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:23.895781   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:23.602562   61400 addons.go:508] enable addons completed in 989.095706ms: enabled=[storage-provisioner default-storageclass metrics-server]
	I0103 20:14:25.602268   61400 node_ready.go:58] node "old-k8s-version-927922" has status "Ready":"False"
	I0103 20:14:27.602561   61400 node_ready.go:58] node "old-k8s-version-927922" has status "Ready":"False"
	I0103 20:14:25.439366   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:27.934938   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:25.753982   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:28.253688   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:26.398696   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:28.896789   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:29.603040   61400 node_ready.go:58] node "old-k8s-version-927922" has status "Ready":"False"
	I0103 20:14:30.102640   61400 node_ready.go:49] node "old-k8s-version-927922" has status "Ready":"True"
	I0103 20:14:30.102663   61400 node_ready.go:38] duration metric: took 6.504277703s waiting for node "old-k8s-version-927922" to be "Ready" ...
	I0103 20:14:30.102672   61400 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:14:30.107593   61400 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-nvbsl" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:30.112792   61400 pod_ready.go:92] pod "coredns-5644d7b6d9-nvbsl" in "kube-system" namespace has status "Ready":"True"
	I0103 20:14:30.112817   61400 pod_ready.go:81] duration metric: took 5.195453ms waiting for pod "coredns-5644d7b6d9-nvbsl" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:30.112828   61400 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-927922" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:30.117802   61400 pod_ready.go:92] pod "etcd-old-k8s-version-927922" in "kube-system" namespace has status "Ready":"True"
	I0103 20:14:30.117827   61400 pod_ready.go:81] duration metric: took 4.989616ms waiting for pod "etcd-old-k8s-version-927922" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:30.117839   61400 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-927922" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:30.123548   61400 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-927922" in "kube-system" namespace has status "Ready":"True"
	I0103 20:14:30.123570   61400 pod_ready.go:81] duration metric: took 5.723206ms waiting for pod "kube-apiserver-old-k8s-version-927922" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:30.123580   61400 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-927922" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:30.128232   61400 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-927922" in "kube-system" namespace has status "Ready":"True"
	I0103 20:14:30.128257   61400 pod_ready.go:81] duration metric: took 4.670196ms waiting for pod "kube-controller-manager-old-k8s-version-927922" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:30.128269   61400 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jk7jw" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:30.503735   61400 pod_ready.go:92] pod "kube-proxy-jk7jw" in "kube-system" namespace has status "Ready":"True"
	I0103 20:14:30.503782   61400 pod_ready.go:81] duration metric: took 375.504442ms waiting for pod "kube-proxy-jk7jw" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:30.503796   61400 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-927922" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:30.903117   61400 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-927922" in "kube-system" namespace has status "Ready":"True"
	I0103 20:14:30.903145   61400 pod_ready.go:81] duration metric: took 399.341883ms waiting for pod "kube-scheduler-old-k8s-version-927922" in "kube-system" namespace to be "Ready" ...
	I0103 20:14:30.903155   61400 pod_ready.go:38] duration metric: took 800.474934ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:14:30.903167   61400 api_server.go:52] waiting for apiserver process to appear ...
	I0103 20:14:30.903215   61400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:14:30.917506   61400 api_server.go:72] duration metric: took 7.661640466s to wait for apiserver process to appear ...
	I0103 20:14:30.917537   61400 api_server.go:88] waiting for apiserver healthz status ...
	I0103 20:14:30.917558   61400 api_server.go:253] Checking apiserver healthz at https://192.168.72.12:8443/healthz ...
	I0103 20:14:30.923921   61400 api_server.go:279] https://192.168.72.12:8443/healthz returned 200:
	ok
	I0103 20:14:30.924810   61400 api_server.go:141] control plane version: v1.16.0
	I0103 20:14:30.924830   61400 api_server.go:131] duration metric: took 7.286806ms to wait for apiserver health ...
	I0103 20:14:30.924837   61400 system_pods.go:43] waiting for kube-system pods to appear ...
	I0103 20:14:31.105108   61400 system_pods.go:59] 7 kube-system pods found
	I0103 20:14:31.105140   61400 system_pods.go:61] "coredns-5644d7b6d9-nvbsl" [22884cc1-f360-4ee8-bafc-340bb24faa41] Running
	I0103 20:14:31.105144   61400 system_pods.go:61] "etcd-old-k8s-version-927922" [f395d0d3-416a-4915-b587-6e51eb8648a2] Running
	I0103 20:14:31.105149   61400 system_pods.go:61] "kube-apiserver-old-k8s-version-927922" [c62c011b-74fa-440c-9ff9-56721cb1a58d] Running
	I0103 20:14:31.105153   61400 system_pods.go:61] "kube-controller-manager-old-k8s-version-927922" [3d85024c-8cc4-4a99-b8b7-2151c10918f7] Running
	I0103 20:14:31.105156   61400 system_pods.go:61] "kube-proxy-jk7jw" [ef720f69-1bfd-4e75-9943-ff7ee3145ecc] Running
	I0103 20:14:31.105160   61400 system_pods.go:61] "kube-scheduler-old-k8s-version-927922" [74ed1414-7a76-45bd-9c0e-e4c9670d4c1b] Running
	I0103 20:14:31.105164   61400 system_pods.go:61] "storage-provisioner" [4157ff41-1b3b-4eb7-b23b-2de69398161c] Running
	I0103 20:14:31.105168   61400 system_pods.go:74] duration metric: took 180.326535ms to wait for pod list to return data ...
	I0103 20:14:31.105176   61400 default_sa.go:34] waiting for default service account to be created ...
	I0103 20:14:31.303919   61400 default_sa.go:45] found service account: "default"
	I0103 20:14:31.303945   61400 default_sa.go:55] duration metric: took 198.763782ms for default service account to be created ...
	I0103 20:14:31.303952   61400 system_pods.go:116] waiting for k8s-apps to be running ...
	I0103 20:14:31.504913   61400 system_pods.go:86] 7 kube-system pods found
	I0103 20:14:31.504942   61400 system_pods.go:89] "coredns-5644d7b6d9-nvbsl" [22884cc1-f360-4ee8-bafc-340bb24faa41] Running
	I0103 20:14:31.504948   61400 system_pods.go:89] "etcd-old-k8s-version-927922" [f395d0d3-416a-4915-b587-6e51eb8648a2] Running
	I0103 20:14:31.504952   61400 system_pods.go:89] "kube-apiserver-old-k8s-version-927922" [c62c011b-74fa-440c-9ff9-56721cb1a58d] Running
	I0103 20:14:31.504960   61400 system_pods.go:89] "kube-controller-manager-old-k8s-version-927922" [3d85024c-8cc4-4a99-b8b7-2151c10918f7] Running
	I0103 20:14:31.504964   61400 system_pods.go:89] "kube-proxy-jk7jw" [ef720f69-1bfd-4e75-9943-ff7ee3145ecc] Running
	I0103 20:14:31.504967   61400 system_pods.go:89] "kube-scheduler-old-k8s-version-927922" [74ed1414-7a76-45bd-9c0e-e4c9670d4c1b] Running
	I0103 20:14:31.504971   61400 system_pods.go:89] "storage-provisioner" [4157ff41-1b3b-4eb7-b23b-2de69398161c] Running
	I0103 20:14:31.504978   61400 system_pods.go:126] duration metric: took 201.020363ms to wait for k8s-apps to be running ...
	I0103 20:14:31.504987   61400 system_svc.go:44] waiting for kubelet service to be running ....
	I0103 20:14:31.505042   61400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 20:14:31.519544   61400 system_svc.go:56] duration metric: took 14.547054ms WaitForService to wait for kubelet.
	I0103 20:14:31.519581   61400 kubeadm.go:581] duration metric: took 8.263723255s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0103 20:14:31.519604   61400 node_conditions.go:102] verifying NodePressure condition ...
	I0103 20:14:31.703367   61400 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0103 20:14:31.703393   61400 node_conditions.go:123] node cpu capacity is 2
	I0103 20:14:31.703402   61400 node_conditions.go:105] duration metric: took 183.794284ms to run NodePressure ...
	I0103 20:14:31.703413   61400 start.go:228] waiting for startup goroutines ...
	I0103 20:14:31.703419   61400 start.go:233] waiting for cluster config update ...
	I0103 20:14:31.703427   61400 start.go:242] writing updated cluster config ...
	I0103 20:14:31.703726   61400 ssh_runner.go:195] Run: rm -f paused
	I0103 20:14:31.752491   61400 start.go:600] kubectl: 1.29.0, cluster: 1.16.0 (minor skew: 13)
	I0103 20:14:31.754609   61400 out.go:177] 
	W0103 20:14:31.756132   61400 out.go:239] ! /usr/local/bin/kubectl is version 1.29.0, which may have incompatibilities with Kubernetes 1.16.0.
	I0103 20:14:31.757531   61400 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0103 20:14:31.758908   61400 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-927922" cluster and "default" namespace by default
	I0103 20:14:29.937557   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:32.437025   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:30.253875   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:32.752584   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:30.898036   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:33.398935   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:34.936535   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:37.436533   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:34.753233   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:37.252419   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:39.253992   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:35.896170   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:37.897520   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:40.397608   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:39.438748   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:41.439514   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:41.254480   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:43.756719   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:42.397869   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:44.398305   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:43.935597   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:45.936279   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:47.939184   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:46.253445   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:48.254497   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:46.896653   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:49.395106   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:50.436008   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:52.436929   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:50.754391   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:53.253984   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:51.396664   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:53.895621   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:54.937380   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:57.435980   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:55.254262   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:57.254379   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:56.399473   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:58.895378   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:59.436517   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:01.436644   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:03.437289   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:14:59.754343   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:02.256605   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:00.896080   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:02.896456   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:05.396614   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:05.935218   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:07.936528   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:04.753320   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:06.753702   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:08.754470   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:07.909774   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:10.398298   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:10.435847   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:12.436285   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:10.755735   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:13.260340   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:12.898368   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:15.395141   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:14.437252   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:16.437752   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:15.753850   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:18.252984   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:17.396224   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:19.396412   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:18.935744   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:20.936627   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:22.937157   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:20.753996   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:23.252893   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:21.396466   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:23.396556   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:25.435441   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:27.437177   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:25.253294   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:27.257573   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:25.895526   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:27.897999   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:30.396749   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:29.935811   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:31.936769   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:29.754895   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:32.252296   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:34.252439   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:32.398706   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:34.895914   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:34.435649   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:36.435937   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:36.253151   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:38.753045   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:36.897764   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:39.395522   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:38.935209   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:40.935922   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:42.936185   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:40.753242   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:43.254160   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:41.395722   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:43.895476   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:44.938043   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:47.436185   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:45.753607   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:47.757575   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:45.895628   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:47.898831   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:50.395366   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:49.437057   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:51.936658   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:50.254313   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:52.754096   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:52.396047   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:54.896005   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:53.937359   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:55.939092   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:58.435858   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:55.253159   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:57.752873   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:56.897368   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:59.397094   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:00.937099   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:02.937220   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:15:59.753924   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:01.754227   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:04.253189   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:01.895645   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:03.895950   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:05.435964   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:07.437247   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:06.753405   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:09.252564   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:06.395775   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:08.397119   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:09.937945   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:12.436531   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:11.254482   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:13.753409   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:10.898350   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:13.397549   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:14.936753   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:17.438482   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:15.753689   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:18.253420   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:15.895365   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:17.897998   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:19.898464   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:19.935559   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:21.935664   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:20.253748   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:22.253878   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:24.254457   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:22.395466   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:24.400100   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:23.935958   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:25.936631   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:28.436748   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:26.752881   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:29.253740   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:26.897228   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:29.396925   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:30.436921   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:32.939573   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:31.254681   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:33.759891   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:31.895948   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:33.899819   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:35.436828   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:37.437536   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:36.252972   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:38.254083   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:36.396572   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:38.895816   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:39.440085   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:41.939589   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:40.752960   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:42.753342   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:40.897788   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:43.396277   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:44.437295   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:46.934854   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:44.753613   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:47.253118   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:45.896539   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:47.897012   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:50.399452   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:48.936795   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:51.435353   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:53.436742   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:49.753890   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:52.252908   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:54.253390   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:52.895504   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:54.896960   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:55.937358   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:58.435997   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:56.256446   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:58.754312   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:56.898710   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:16:58.899652   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:00.437252   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:02.936336   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:01.254343   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:03.754483   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:01.398833   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:03.896269   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:05.437531   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:07.935848   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:05.755471   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:07.756171   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:05.897369   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:08.397436   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:09.936237   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:11.940482   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:10.253599   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:12.254176   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:14.254316   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:10.898370   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:13.400421   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:14.436922   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:16.936283   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:16.753503   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:19.253120   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:15.896003   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:18.396552   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:19.438479   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:21.936957   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:21.253522   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:23.752947   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:20.895961   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:23.395452   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:24.435005   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:26.437797   61676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:26.437828   61676 pod_ready.go:81] duration metric: took 4m0.009294112s waiting for pod "metrics-server-57f55c9bc5-sm8rb" in "kube-system" namespace to be "Ready" ...
	E0103 20:17:26.437841   61676 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0103 20:17:26.437850   61676 pod_ready.go:38] duration metric: took 4m1.606787831s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:17:26.437868   61676 api_server.go:52] waiting for apiserver process to appear ...
	I0103 20:17:26.437901   61676 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0103 20:17:26.437951   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0103 20:17:26.499917   61676 cri.go:89] found id: "b43e6c342d85d7f5a7233dc5c79962eece071e18b4bbcd6583c66d34a8bba0d6"
	I0103 20:17:26.499942   61676 cri.go:89] found id: ""
	I0103 20:17:26.499958   61676 logs.go:284] 1 containers: [b43e6c342d85d7f5a7233dc5c79962eece071e18b4bbcd6583c66d34a8bba0d6]
	I0103 20:17:26.500014   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:26.504239   61676 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0103 20:17:26.504290   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0103 20:17:26.539965   61676 cri.go:89] found id: "d5b2310ec90e1b2cf6666d6b054b6e3233664a226da6f2c5dbb9f1aad6bf5e40"
	I0103 20:17:26.539992   61676 cri.go:89] found id: ""
	I0103 20:17:26.540001   61676 logs.go:284] 1 containers: [d5b2310ec90e1b2cf6666d6b054b6e3233664a226da6f2c5dbb9f1aad6bf5e40]
	I0103 20:17:26.540052   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:26.544591   61676 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0103 20:17:26.544667   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0103 20:17:26.583231   61676 cri.go:89] found id: "e982a226a7c2e7dda117aabd99279729198043d8b9c79fdc0904c8384f9a031b"
	I0103 20:17:26.583256   61676 cri.go:89] found id: ""
	I0103 20:17:26.583265   61676 logs.go:284] 1 containers: [e982a226a7c2e7dda117aabd99279729198043d8b9c79fdc0904c8384f9a031b]
	I0103 20:17:26.583328   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:26.587642   61676 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0103 20:17:26.587705   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0103 20:17:26.625230   61676 cri.go:89] found id: "91cc8e54c59c4642dc1a808adf81ed78af36cc6dae57d37fd913d2d01d232f3d"
	I0103 20:17:26.625258   61676 cri.go:89] found id: ""
	I0103 20:17:26.625267   61676 logs.go:284] 1 containers: [91cc8e54c59c4642dc1a808adf81ed78af36cc6dae57d37fd913d2d01d232f3d]
	I0103 20:17:26.625329   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:26.629448   61676 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0103 20:17:26.629527   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0103 20:17:26.666698   61676 cri.go:89] found id: "a076ccb3aaf528efca2f2b4a38d06dc3b8376212edc210731b7c268a24909ccf"
	I0103 20:17:26.666726   61676 cri.go:89] found id: ""
	I0103 20:17:26.666736   61676 logs.go:284] 1 containers: [a076ccb3aaf528efca2f2b4a38d06dc3b8376212edc210731b7c268a24909ccf]
	I0103 20:17:26.666796   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:26.671434   61676 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0103 20:17:26.671500   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0103 20:17:26.703900   61676 cri.go:89] found id: "8049f81441fd2e7bb236ca49b119c49cefb7a5c69b6880f606b3cc2789145523"
	I0103 20:17:26.703921   61676 cri.go:89] found id: ""
	I0103 20:17:26.703929   61676 logs.go:284] 1 containers: [8049f81441fd2e7bb236ca49b119c49cefb7a5c69b6880f606b3cc2789145523]
	I0103 20:17:26.703986   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:26.707915   61676 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0103 20:17:26.707979   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0103 20:17:26.747144   61676 cri.go:89] found id: ""
	I0103 20:17:26.747168   61676 logs.go:284] 0 containers: []
	W0103 20:17:26.747182   61676 logs.go:286] No container was found matching "kindnet"
	I0103 20:17:26.747189   61676 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0103 20:17:26.747246   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0103 20:17:26.786418   61676 cri.go:89] found id: "0ed16e65a5dbabdbadbf6d56a4fd7bafe80d50867ed1829e86bfa8f28e8fb719"
	I0103 20:17:26.786441   61676 cri.go:89] found id: "3c57ed4c58edf02d0e73cd6dfc6799993556d74916ffe4c07b91d85346f291e2"
	I0103 20:17:26.786448   61676 cri.go:89] found id: ""
	I0103 20:17:26.786456   61676 logs.go:284] 2 containers: [0ed16e65a5dbabdbadbf6d56a4fd7bafe80d50867ed1829e86bfa8f28e8fb719 3c57ed4c58edf02d0e73cd6dfc6799993556d74916ffe4c07b91d85346f291e2]
	I0103 20:17:26.786515   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:26.790506   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:26.794304   61676 logs.go:123] Gathering logs for kubelet ...
	I0103 20:17:26.794330   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 20:17:26.851272   61676 logs.go:123] Gathering logs for kube-apiserver [b43e6c342d85d7f5a7233dc5c79962eece071e18b4bbcd6583c66d34a8bba0d6] ...
	I0103 20:17:26.851317   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b43e6c342d85d7f5a7233dc5c79962eece071e18b4bbcd6583c66d34a8bba0d6"
	I0103 20:17:26.894480   61676 logs.go:123] Gathering logs for etcd [d5b2310ec90e1b2cf6666d6b054b6e3233664a226da6f2c5dbb9f1aad6bf5e40] ...
	I0103 20:17:26.894508   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5b2310ec90e1b2cf6666d6b054b6e3233664a226da6f2c5dbb9f1aad6bf5e40"
	I0103 20:17:26.941799   61676 logs.go:123] Gathering logs for kube-scheduler [91cc8e54c59c4642dc1a808adf81ed78af36cc6dae57d37fd913d2d01d232f3d] ...
	I0103 20:17:26.941826   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 91cc8e54c59c4642dc1a808adf81ed78af36cc6dae57d37fd913d2d01d232f3d"
	I0103 20:17:26.981759   61676 logs.go:123] Gathering logs for kube-proxy [a076ccb3aaf528efca2f2b4a38d06dc3b8376212edc210731b7c268a24909ccf] ...
	I0103 20:17:26.981793   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a076ccb3aaf528efca2f2b4a38d06dc3b8376212edc210731b7c268a24909ccf"
	I0103 20:17:27.021318   61676 logs.go:123] Gathering logs for storage-provisioner [0ed16e65a5dbabdbadbf6d56a4fd7bafe80d50867ed1829e86bfa8f28e8fb719] ...
	I0103 20:17:27.021347   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ed16e65a5dbabdbadbf6d56a4fd7bafe80d50867ed1829e86bfa8f28e8fb719"
	I0103 20:17:27.061320   61676 logs.go:123] Gathering logs for storage-provisioner [3c57ed4c58edf02d0e73cd6dfc6799993556d74916ffe4c07b91d85346f291e2] ...
	I0103 20:17:27.061351   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c57ed4c58edf02d0e73cd6dfc6799993556d74916ffe4c07b91d85346f291e2"
	I0103 20:17:27.110137   61676 logs.go:123] Gathering logs for dmesg ...
	I0103 20:17:27.110169   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 20:17:27.123548   61676 logs.go:123] Gathering logs for coredns [e982a226a7c2e7dda117aabd99279729198043d8b9c79fdc0904c8384f9a031b] ...
	I0103 20:17:27.123582   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e982a226a7c2e7dda117aabd99279729198043d8b9c79fdc0904c8384f9a031b"
	I0103 20:17:27.162644   61676 logs.go:123] Gathering logs for kube-controller-manager [8049f81441fd2e7bb236ca49b119c49cefb7a5c69b6880f606b3cc2789145523] ...
	I0103 20:17:27.162678   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8049f81441fd2e7bb236ca49b119c49cefb7a5c69b6880f606b3cc2789145523"
	I0103 20:17:27.211599   61676 logs.go:123] Gathering logs for describe nodes ...
	I0103 20:17:27.211636   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0103 20:17:27.361299   61676 logs.go:123] Gathering logs for CRI-O ...
	I0103 20:17:27.361329   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0103 20:17:27.866123   61676 logs.go:123] Gathering logs for container status ...
	I0103 20:17:27.866166   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 20:17:25.753957   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:27.754559   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:25.896204   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:28.395594   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:30.418870   61676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:17:30.433778   61676 api_server.go:72] duration metric: took 4m12.637164197s to wait for apiserver process to appear ...
	I0103 20:17:30.433801   61676 api_server.go:88] waiting for apiserver healthz status ...
	I0103 20:17:30.433838   61676 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0103 20:17:30.433911   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0103 20:17:30.472309   61676 cri.go:89] found id: "b43e6c342d85d7f5a7233dc5c79962eece071e18b4bbcd6583c66d34a8bba0d6"
	I0103 20:17:30.472337   61676 cri.go:89] found id: ""
	I0103 20:17:30.472348   61676 logs.go:284] 1 containers: [b43e6c342d85d7f5a7233dc5c79962eece071e18b4bbcd6583c66d34a8bba0d6]
	I0103 20:17:30.472407   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:30.476787   61676 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0103 20:17:30.476858   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0103 20:17:30.522290   61676 cri.go:89] found id: "d5b2310ec90e1b2cf6666d6b054b6e3233664a226da6f2c5dbb9f1aad6bf5e40"
	I0103 20:17:30.522322   61676 cri.go:89] found id: ""
	I0103 20:17:30.522334   61676 logs.go:284] 1 containers: [d5b2310ec90e1b2cf6666d6b054b6e3233664a226da6f2c5dbb9f1aad6bf5e40]
	I0103 20:17:30.522390   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:30.526502   61676 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0103 20:17:30.526581   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0103 20:17:30.568301   61676 cri.go:89] found id: "e982a226a7c2e7dda117aabd99279729198043d8b9c79fdc0904c8384f9a031b"
	I0103 20:17:30.568328   61676 cri.go:89] found id: ""
	I0103 20:17:30.568335   61676 logs.go:284] 1 containers: [e982a226a7c2e7dda117aabd99279729198043d8b9c79fdc0904c8384f9a031b]
	I0103 20:17:30.568382   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:30.572398   61676 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0103 20:17:30.572454   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0103 20:17:30.611671   61676 cri.go:89] found id: "91cc8e54c59c4642dc1a808adf81ed78af36cc6dae57d37fd913d2d01d232f3d"
	I0103 20:17:30.611694   61676 cri.go:89] found id: ""
	I0103 20:17:30.611702   61676 logs.go:284] 1 containers: [91cc8e54c59c4642dc1a808adf81ed78af36cc6dae57d37fd913d2d01d232f3d]
	I0103 20:17:30.611749   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:30.615971   61676 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0103 20:17:30.616035   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0103 20:17:30.658804   61676 cri.go:89] found id: "a076ccb3aaf528efca2f2b4a38d06dc3b8376212edc210731b7c268a24909ccf"
	I0103 20:17:30.658830   61676 cri.go:89] found id: ""
	I0103 20:17:30.658839   61676 logs.go:284] 1 containers: [a076ccb3aaf528efca2f2b4a38d06dc3b8376212edc210731b7c268a24909ccf]
	I0103 20:17:30.658889   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:30.662859   61676 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0103 20:17:30.662930   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0103 20:17:30.705941   61676 cri.go:89] found id: "8049f81441fd2e7bb236ca49b119c49cefb7a5c69b6880f606b3cc2789145523"
	I0103 20:17:30.705968   61676 cri.go:89] found id: ""
	I0103 20:17:30.705976   61676 logs.go:284] 1 containers: [8049f81441fd2e7bb236ca49b119c49cefb7a5c69b6880f606b3cc2789145523]
	I0103 20:17:30.706031   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:30.710228   61676 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0103 20:17:30.710308   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0103 20:17:30.749052   61676 cri.go:89] found id: ""
	I0103 20:17:30.749077   61676 logs.go:284] 0 containers: []
	W0103 20:17:30.749088   61676 logs.go:286] No container was found matching "kindnet"
	I0103 20:17:30.749096   61676 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0103 20:17:30.749157   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0103 20:17:30.786239   61676 cri.go:89] found id: "0ed16e65a5dbabdbadbf6d56a4fd7bafe80d50867ed1829e86bfa8f28e8fb719"
	I0103 20:17:30.786267   61676 cri.go:89] found id: "3c57ed4c58edf02d0e73cd6dfc6799993556d74916ffe4c07b91d85346f291e2"
	I0103 20:17:30.786273   61676 cri.go:89] found id: ""
	I0103 20:17:30.786280   61676 logs.go:284] 2 containers: [0ed16e65a5dbabdbadbf6d56a4fd7bafe80d50867ed1829e86bfa8f28e8fb719 3c57ed4c58edf02d0e73cd6dfc6799993556d74916ffe4c07b91d85346f291e2]
	I0103 20:17:30.786341   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:30.790680   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:30.794294   61676 logs.go:123] Gathering logs for coredns [e982a226a7c2e7dda117aabd99279729198043d8b9c79fdc0904c8384f9a031b] ...
	I0103 20:17:30.794320   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e982a226a7c2e7dda117aabd99279729198043d8b9c79fdc0904c8384f9a031b"
	I0103 20:17:30.835916   61676 logs.go:123] Gathering logs for storage-provisioner [0ed16e65a5dbabdbadbf6d56a4fd7bafe80d50867ed1829e86bfa8f28e8fb719] ...
	I0103 20:17:30.835952   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ed16e65a5dbabdbadbf6d56a4fd7bafe80d50867ed1829e86bfa8f28e8fb719"
	I0103 20:17:30.876225   61676 logs.go:123] Gathering logs for storage-provisioner [3c57ed4c58edf02d0e73cd6dfc6799993556d74916ffe4c07b91d85346f291e2] ...
	I0103 20:17:30.876255   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c57ed4c58edf02d0e73cd6dfc6799993556d74916ffe4c07b91d85346f291e2"
	I0103 20:17:30.917657   61676 logs.go:123] Gathering logs for dmesg ...
	I0103 20:17:30.917684   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 20:17:30.930805   61676 logs.go:123] Gathering logs for describe nodes ...
	I0103 20:17:30.930831   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0103 20:17:31.060049   61676 logs.go:123] Gathering logs for kube-apiserver [b43e6c342d85d7f5a7233dc5c79962eece071e18b4bbcd6583c66d34a8bba0d6] ...
	I0103 20:17:31.060086   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b43e6c342d85d7f5a7233dc5c79962eece071e18b4bbcd6583c66d34a8bba0d6"
	I0103 20:17:31.119725   61676 logs.go:123] Gathering logs for kube-scheduler [91cc8e54c59c4642dc1a808adf81ed78af36cc6dae57d37fd913d2d01d232f3d] ...
	I0103 20:17:31.119754   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 91cc8e54c59c4642dc1a808adf81ed78af36cc6dae57d37fd913d2d01d232f3d"
	I0103 20:17:31.164226   61676 logs.go:123] Gathering logs for kube-proxy [a076ccb3aaf528efca2f2b4a38d06dc3b8376212edc210731b7c268a24909ccf] ...
	I0103 20:17:31.164261   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a076ccb3aaf528efca2f2b4a38d06dc3b8376212edc210731b7c268a24909ccf"
	I0103 20:17:31.204790   61676 logs.go:123] Gathering logs for kube-controller-manager [8049f81441fd2e7bb236ca49b119c49cefb7a5c69b6880f606b3cc2789145523] ...
	I0103 20:17:31.204816   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8049f81441fd2e7bb236ca49b119c49cefb7a5c69b6880f606b3cc2789145523"
	I0103 20:17:31.264949   61676 logs.go:123] Gathering logs for CRI-O ...
	I0103 20:17:31.264984   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0103 20:17:31.658178   61676 logs.go:123] Gathering logs for kubelet ...
	I0103 20:17:31.658217   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 20:17:31.712090   61676 logs.go:123] Gathering logs for etcd [d5b2310ec90e1b2cf6666d6b054b6e3233664a226da6f2c5dbb9f1aad6bf5e40] ...
	I0103 20:17:31.712125   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5b2310ec90e1b2cf6666d6b054b6e3233664a226da6f2c5dbb9f1aad6bf5e40"
	I0103 20:17:31.757333   61676 logs.go:123] Gathering logs for container status ...
	I0103 20:17:31.757364   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 20:17:30.253170   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:32.753056   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:30.896380   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:32.896512   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:35.399775   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:34.304692   61676 api_server.go:253] Checking apiserver healthz at https://192.168.50.197:8443/healthz ...
	I0103 20:17:34.311338   61676 api_server.go:279] https://192.168.50.197:8443/healthz returned 200:
	ok
	I0103 20:17:34.312603   61676 api_server.go:141] control plane version: v1.28.4
	I0103 20:17:34.312624   61676 api_server.go:131] duration metric: took 3.878815002s to wait for apiserver health ...
	I0103 20:17:34.312632   61676 system_pods.go:43] waiting for kube-system pods to appear ...
	I0103 20:17:34.312651   61676 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0103 20:17:34.312705   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0103 20:17:34.347683   61676 cri.go:89] found id: "b43e6c342d85d7f5a7233dc5c79962eece071e18b4bbcd6583c66d34a8bba0d6"
	I0103 20:17:34.347701   61676 cri.go:89] found id: ""
	I0103 20:17:34.347711   61676 logs.go:284] 1 containers: [b43e6c342d85d7f5a7233dc5c79962eece071e18b4bbcd6583c66d34a8bba0d6]
	I0103 20:17:34.347769   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:34.351695   61676 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0103 20:17:34.351773   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0103 20:17:34.386166   61676 cri.go:89] found id: "d5b2310ec90e1b2cf6666d6b054b6e3233664a226da6f2c5dbb9f1aad6bf5e40"
	I0103 20:17:34.386188   61676 cri.go:89] found id: ""
	I0103 20:17:34.386197   61676 logs.go:284] 1 containers: [d5b2310ec90e1b2cf6666d6b054b6e3233664a226da6f2c5dbb9f1aad6bf5e40]
	I0103 20:17:34.386259   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:34.390352   61676 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0103 20:17:34.390427   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0103 20:17:34.427772   61676 cri.go:89] found id: "e982a226a7c2e7dda117aabd99279729198043d8b9c79fdc0904c8384f9a031b"
	I0103 20:17:34.427801   61676 cri.go:89] found id: ""
	I0103 20:17:34.427811   61676 logs.go:284] 1 containers: [e982a226a7c2e7dda117aabd99279729198043d8b9c79fdc0904c8384f9a031b]
	I0103 20:17:34.427872   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:34.432258   61676 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0103 20:17:34.432324   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0103 20:17:34.471746   61676 cri.go:89] found id: "91cc8e54c59c4642dc1a808adf81ed78af36cc6dae57d37fd913d2d01d232f3d"
	I0103 20:17:34.471789   61676 cri.go:89] found id: ""
	I0103 20:17:34.471812   61676 logs.go:284] 1 containers: [91cc8e54c59c4642dc1a808adf81ed78af36cc6dae57d37fd913d2d01d232f3d]
	I0103 20:17:34.471878   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:34.476656   61676 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0103 20:17:34.476729   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0103 20:17:34.514594   61676 cri.go:89] found id: "a076ccb3aaf528efca2f2b4a38d06dc3b8376212edc210731b7c268a24909ccf"
	I0103 20:17:34.514626   61676 cri.go:89] found id: ""
	I0103 20:17:34.514685   61676 logs.go:284] 1 containers: [a076ccb3aaf528efca2f2b4a38d06dc3b8376212edc210731b7c268a24909ccf]
	I0103 20:17:34.514779   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:34.518789   61676 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0103 20:17:34.518849   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0103 20:17:34.555672   61676 cri.go:89] found id: "8049f81441fd2e7bb236ca49b119c49cefb7a5c69b6880f606b3cc2789145523"
	I0103 20:17:34.555698   61676 cri.go:89] found id: ""
	I0103 20:17:34.555707   61676 logs.go:284] 1 containers: [8049f81441fd2e7bb236ca49b119c49cefb7a5c69b6880f606b3cc2789145523]
	I0103 20:17:34.555771   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:34.560278   61676 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0103 20:17:34.560338   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0103 20:17:34.598718   61676 cri.go:89] found id: ""
	I0103 20:17:34.598742   61676 logs.go:284] 0 containers: []
	W0103 20:17:34.598753   61676 logs.go:286] No container was found matching "kindnet"
	I0103 20:17:34.598759   61676 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0103 20:17:34.598810   61676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0103 20:17:34.635723   61676 cri.go:89] found id: "0ed16e65a5dbabdbadbf6d56a4fd7bafe80d50867ed1829e86bfa8f28e8fb719"
	I0103 20:17:34.635751   61676 cri.go:89] found id: "3c57ed4c58edf02d0e73cd6dfc6799993556d74916ffe4c07b91d85346f291e2"
	I0103 20:17:34.635758   61676 cri.go:89] found id: ""
	I0103 20:17:34.635767   61676 logs.go:284] 2 containers: [0ed16e65a5dbabdbadbf6d56a4fd7bafe80d50867ed1829e86bfa8f28e8fb719 3c57ed4c58edf02d0e73cd6dfc6799993556d74916ffe4c07b91d85346f291e2]
	I0103 20:17:34.635814   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:34.640466   61676 ssh_runner.go:195] Run: which crictl
	I0103 20:17:34.644461   61676 logs.go:123] Gathering logs for dmesg ...
	I0103 20:17:34.644490   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 20:17:34.659819   61676 logs.go:123] Gathering logs for coredns [e982a226a7c2e7dda117aabd99279729198043d8b9c79fdc0904c8384f9a031b] ...
	I0103 20:17:34.659856   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e982a226a7c2e7dda117aabd99279729198043d8b9c79fdc0904c8384f9a031b"
	I0103 20:17:34.697807   61676 logs.go:123] Gathering logs for kube-scheduler [91cc8e54c59c4642dc1a808adf81ed78af36cc6dae57d37fd913d2d01d232f3d] ...
	I0103 20:17:34.697840   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 91cc8e54c59c4642dc1a808adf81ed78af36cc6dae57d37fd913d2d01d232f3d"
	I0103 20:17:34.745366   61676 logs.go:123] Gathering logs for kube-controller-manager [8049f81441fd2e7bb236ca49b119c49cefb7a5c69b6880f606b3cc2789145523] ...
	I0103 20:17:34.745397   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8049f81441fd2e7bb236ca49b119c49cefb7a5c69b6880f606b3cc2789145523"
	I0103 20:17:34.804885   61676 logs.go:123] Gathering logs for kube-apiserver [b43e6c342d85d7f5a7233dc5c79962eece071e18b4bbcd6583c66d34a8bba0d6] ...
	I0103 20:17:34.804919   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b43e6c342d85d7f5a7233dc5c79962eece071e18b4bbcd6583c66d34a8bba0d6"
	I0103 20:17:34.848753   61676 logs.go:123] Gathering logs for etcd [d5b2310ec90e1b2cf6666d6b054b6e3233664a226da6f2c5dbb9f1aad6bf5e40] ...
	I0103 20:17:34.848784   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5b2310ec90e1b2cf6666d6b054b6e3233664a226da6f2c5dbb9f1aad6bf5e40"
	I0103 20:17:34.891492   61676 logs.go:123] Gathering logs for CRI-O ...
	I0103 20:17:34.891524   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0103 20:17:35.234093   61676 logs.go:123] Gathering logs for kube-proxy [a076ccb3aaf528efca2f2b4a38d06dc3b8376212edc210731b7c268a24909ccf] ...
	I0103 20:17:35.234133   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a076ccb3aaf528efca2f2b4a38d06dc3b8376212edc210731b7c268a24909ccf"
	I0103 20:17:35.281396   61676 logs.go:123] Gathering logs for storage-provisioner [0ed16e65a5dbabdbadbf6d56a4fd7bafe80d50867ed1829e86bfa8f28e8fb719] ...
	I0103 20:17:35.281425   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ed16e65a5dbabdbadbf6d56a4fd7bafe80d50867ed1829e86bfa8f28e8fb719"
	I0103 20:17:35.317595   61676 logs.go:123] Gathering logs for storage-provisioner [3c57ed4c58edf02d0e73cd6dfc6799993556d74916ffe4c07b91d85346f291e2] ...
	I0103 20:17:35.317622   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c57ed4c58edf02d0e73cd6dfc6799993556d74916ffe4c07b91d85346f291e2"
	I0103 20:17:35.357552   61676 logs.go:123] Gathering logs for container status ...
	I0103 20:17:35.357600   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 20:17:35.405369   61676 logs.go:123] Gathering logs for kubelet ...
	I0103 20:17:35.405394   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 20:17:35.459496   61676 logs.go:123] Gathering logs for describe nodes ...
	I0103 20:17:35.459535   61676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0103 20:17:38.101844   61676 system_pods.go:59] 8 kube-system pods found
	I0103 20:17:38.101870   61676 system_pods.go:61] "coredns-5dd5756b68-sx6gg" [6a4ea161-1a32-4c3b-9a0d-b4c596492d8b] Running
	I0103 20:17:38.101875   61676 system_pods.go:61] "etcd-embed-certs-451331" [01d6441d-5e39-405a-81df-c2ed1e28cf0b] Running
	I0103 20:17:38.101879   61676 system_pods.go:61] "kube-apiserver-embed-certs-451331" [ed38f120-6a1a-48e7-9346-f792f2e13cfc] Running
	I0103 20:17:38.101886   61676 system_pods.go:61] "kube-controller-manager-embed-certs-451331" [4ca17ea6-a7e6-425b-98ba-7f917ceb91a0] Running
	I0103 20:17:38.101892   61676 system_pods.go:61] "kube-proxy-fsnb9" [d1f00cf1-e9c4-442b-a6b3-b633252b840c] Running
	I0103 20:17:38.101898   61676 system_pods.go:61] "kube-scheduler-embed-certs-451331" [00ec8091-7ed7-40b0-8b63-1c548fa8632d] Running
	I0103 20:17:38.101907   61676 system_pods.go:61] "metrics-server-57f55c9bc5-sm8rb" [12b9f83d-abf8-431c-a271-b8489d32f0de] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0103 20:17:38.101919   61676 system_pods.go:61] "storage-provisioner" [cbce49e7-cef5-40a1-a017-906fcc77ef66] Running
	I0103 20:17:38.101931   61676 system_pods.go:74] duration metric: took 3.789293156s to wait for pod list to return data ...
	I0103 20:17:38.101940   61676 default_sa.go:34] waiting for default service account to be created ...
	I0103 20:17:38.104360   61676 default_sa.go:45] found service account: "default"
	I0103 20:17:38.104386   61676 default_sa.go:55] duration metric: took 2.437157ms for default service account to be created ...
	I0103 20:17:38.104395   61676 system_pods.go:116] waiting for k8s-apps to be running ...
	I0103 20:17:38.110198   61676 system_pods.go:86] 8 kube-system pods found
	I0103 20:17:38.110226   61676 system_pods.go:89] "coredns-5dd5756b68-sx6gg" [6a4ea161-1a32-4c3b-9a0d-b4c596492d8b] Running
	I0103 20:17:38.110233   61676 system_pods.go:89] "etcd-embed-certs-451331" [01d6441d-5e39-405a-81df-c2ed1e28cf0b] Running
	I0103 20:17:38.110241   61676 system_pods.go:89] "kube-apiserver-embed-certs-451331" [ed38f120-6a1a-48e7-9346-f792f2e13cfc] Running
	I0103 20:17:38.110247   61676 system_pods.go:89] "kube-controller-manager-embed-certs-451331" [4ca17ea6-a7e6-425b-98ba-7f917ceb91a0] Running
	I0103 20:17:38.110254   61676 system_pods.go:89] "kube-proxy-fsnb9" [d1f00cf1-e9c4-442b-a6b3-b633252b840c] Running
	I0103 20:17:38.110262   61676 system_pods.go:89] "kube-scheduler-embed-certs-451331" [00ec8091-7ed7-40b0-8b63-1c548fa8632d] Running
	I0103 20:17:38.110272   61676 system_pods.go:89] "metrics-server-57f55c9bc5-sm8rb" [12b9f83d-abf8-431c-a271-b8489d32f0de] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0103 20:17:38.110287   61676 system_pods.go:89] "storage-provisioner" [cbce49e7-cef5-40a1-a017-906fcc77ef66] Running
	I0103 20:17:38.110300   61676 system_pods.go:126] duration metric: took 5.897003ms to wait for k8s-apps to be running ...
	I0103 20:17:38.110310   61676 system_svc.go:44] waiting for kubelet service to be running ....
	I0103 20:17:38.110359   61676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 20:17:38.129025   61676 system_svc.go:56] duration metric: took 18.705736ms WaitForService to wait for kubelet.
	I0103 20:17:38.129071   61676 kubeadm.go:581] duration metric: took 4m20.332460734s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0103 20:17:38.129104   61676 node_conditions.go:102] verifying NodePressure condition ...
	I0103 20:17:38.132674   61676 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0103 20:17:38.132703   61676 node_conditions.go:123] node cpu capacity is 2
	I0103 20:17:38.132718   61676 node_conditions.go:105] duration metric: took 3.608193ms to run NodePressure ...
	I0103 20:17:38.132803   61676 start.go:228] waiting for startup goroutines ...
	I0103 20:17:38.132830   61676 start.go:233] waiting for cluster config update ...
	I0103 20:17:38.132846   61676 start.go:242] writing updated cluster config ...
	I0103 20:17:38.133198   61676 ssh_runner.go:195] Run: rm -f paused
	I0103 20:17:38.185728   61676 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0103 20:17:38.187862   61676 out.go:177] * Done! kubectl is now configured to use "embed-certs-451331" cluster and "default" namespace by default
	I0103 20:17:34.753175   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:37.254091   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:37.896317   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:40.396299   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:39.752580   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:41.755418   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:44.253073   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:42.897389   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:45.396646   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:46.253958   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:48.753284   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:47.398164   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:49.895246   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:50.754133   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:53.253046   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:51.895627   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:53.897877   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:55.254029   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:57.752707   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:56.398655   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:58.897483   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:17:59.753306   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:18:01.753500   62015 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace has status "Ready":"False"
	I0103 20:18:02.255901   62015 pod_ready.go:81] duration metric: took 4m0.010124972s waiting for pod "metrics-server-57f55c9bc5-tqn5m" in "kube-system" namespace to be "Ready" ...
	E0103 20:18:02.255929   62015 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0103 20:18:02.255939   62015 pod_ready.go:38] duration metric: took 4m4.070078749s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:18:02.255957   62015 api_server.go:52] waiting for apiserver process to appear ...
	I0103 20:18:02.255989   62015 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0103 20:18:02.256064   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0103 20:18:02.312578   62015 cri.go:89] found id: "fb19a705262540d0025443f8a9db8a3139cffa87d8fd8f412b12547818a91a8b"
	I0103 20:18:02.312606   62015 cri.go:89] found id: ""
	I0103 20:18:02.312616   62015 logs.go:284] 1 containers: [fb19a705262540d0025443f8a9db8a3139cffa87d8fd8f412b12547818a91a8b]
	I0103 20:18:02.312679   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:02.317969   62015 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0103 20:18:02.318064   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0103 20:18:02.361423   62015 cri.go:89] found id: "f7d2f606bd4457e00acad3e38e40d2a0eeba1830b665bd0b453061447cb63748"
	I0103 20:18:02.361451   62015 cri.go:89] found id: ""
	I0103 20:18:02.361464   62015 logs.go:284] 1 containers: [f7d2f606bd4457e00acad3e38e40d2a0eeba1830b665bd0b453061447cb63748]
	I0103 20:18:02.361527   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:02.365691   62015 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0103 20:18:02.365772   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0103 20:18:02.415087   62015 cri.go:89] found id: "b13d0a23b2b29e43671041dd6b0519d5cdc0ff78c47236c54585056c177f7f4a"
	I0103 20:18:02.415118   62015 cri.go:89] found id: ""
	I0103 20:18:02.415128   62015 logs.go:284] 1 containers: [b13d0a23b2b29e43671041dd6b0519d5cdc0ff78c47236c54585056c177f7f4a]
	I0103 20:18:02.415188   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:02.419409   62015 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0103 20:18:02.419493   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0103 20:18:02.459715   62015 cri.go:89] found id: "03433af76d74a120ca843ab0d9d45d2a0808d9048bf656bff68d7b7371082893"
	I0103 20:18:02.459744   62015 cri.go:89] found id: ""
	I0103 20:18:02.459754   62015 logs.go:284] 1 containers: [03433af76d74a120ca843ab0d9d45d2a0808d9048bf656bff68d7b7371082893]
	I0103 20:18:02.459816   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:02.464105   62015 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0103 20:18:02.464186   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0103 20:18:02.515523   62015 cri.go:89] found id: "250be399ab1a0c358e410293a6952aadd55d48a462f2edb9c6d0b560eb323cd8"
	I0103 20:18:02.515547   62015 cri.go:89] found id: ""
	I0103 20:18:02.515556   62015 logs.go:284] 1 containers: [250be399ab1a0c358e410293a6952aadd55d48a462f2edb9c6d0b560eb323cd8]
	I0103 20:18:02.515619   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:02.519586   62015 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0103 20:18:02.519646   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0103 20:18:02.561187   62015 cri.go:89] found id: "67f470e7e603d5253da506a4ad4eacce339e32b382bb6ad5e981e6f0c40abb85"
	I0103 20:18:02.561210   62015 cri.go:89] found id: ""
	I0103 20:18:02.561219   62015 logs.go:284] 1 containers: [67f470e7e603d5253da506a4ad4eacce339e32b382bb6ad5e981e6f0c40abb85]
	I0103 20:18:02.561288   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:02.566206   62015 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0103 20:18:02.566289   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0103 20:18:02.610993   62015 cri.go:89] found id: ""
	I0103 20:18:02.611019   62015 logs.go:284] 0 containers: []
	W0103 20:18:02.611029   62015 logs.go:286] No container was found matching "kindnet"
	I0103 20:18:02.611036   62015 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0103 20:18:02.611111   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0103 20:18:02.651736   62015 cri.go:89] found id: "08f95eed823c13190efb38f0b605b92442b8229f1fd1e3b9ab3f2d7fdf18c052"
	I0103 20:18:02.651764   62015 cri.go:89] found id: "367b9549fe5f7c717d71d33d0fbf5559d0b671b4eec29201566aa8354781474d"
	I0103 20:18:02.651771   62015 cri.go:89] found id: ""
	I0103 20:18:02.651779   62015 logs.go:284] 2 containers: [08f95eed823c13190efb38f0b605b92442b8229f1fd1e3b9ab3f2d7fdf18c052 367b9549fe5f7c717d71d33d0fbf5559d0b671b4eec29201566aa8354781474d]
	I0103 20:18:02.651839   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:02.656284   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:02.660614   62015 logs.go:123] Gathering logs for etcd [f7d2f606bd4457e00acad3e38e40d2a0eeba1830b665bd0b453061447cb63748] ...
	I0103 20:18:02.660636   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7d2f606bd4457e00acad3e38e40d2a0eeba1830b665bd0b453061447cb63748"
	I0103 20:18:02.707759   62015 logs.go:123] Gathering logs for kube-controller-manager [67f470e7e603d5253da506a4ad4eacce339e32b382bb6ad5e981e6f0c40abb85] ...
	I0103 20:18:02.707804   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 67f470e7e603d5253da506a4ad4eacce339e32b382bb6ad5e981e6f0c40abb85"
	I0103 20:18:02.766498   62015 logs.go:123] Gathering logs for CRI-O ...
	I0103 20:18:02.766551   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0103 20:18:03.227838   62015 logs.go:123] Gathering logs for kube-proxy [250be399ab1a0c358e410293a6952aadd55d48a462f2edb9c6d0b560eb323cd8] ...
	I0103 20:18:03.227884   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 250be399ab1a0c358e410293a6952aadd55d48a462f2edb9c6d0b560eb323cd8"
	I0103 20:18:03.269131   62015 logs.go:123] Gathering logs for storage-provisioner [08f95eed823c13190efb38f0b605b92442b8229f1fd1e3b9ab3f2d7fdf18c052] ...
	I0103 20:18:03.269174   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08f95eed823c13190efb38f0b605b92442b8229f1fd1e3b9ab3f2d7fdf18c052"
	I0103 20:18:03.307383   62015 logs.go:123] Gathering logs for kubelet ...
	I0103 20:18:03.307410   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 20:18:03.362005   62015 logs.go:123] Gathering logs for kube-apiserver [fb19a705262540d0025443f8a9db8a3139cffa87d8fd8f412b12547818a91a8b] ...
	I0103 20:18:03.362043   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb19a705262540d0025443f8a9db8a3139cffa87d8fd8f412b12547818a91a8b"
	I0103 20:18:03.412300   62015 logs.go:123] Gathering logs for coredns [b13d0a23b2b29e43671041dd6b0519d5cdc0ff78c47236c54585056c177f7f4a] ...
	I0103 20:18:03.412333   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b13d0a23b2b29e43671041dd6b0519d5cdc0ff78c47236c54585056c177f7f4a"
	I0103 20:18:03.448896   62015 logs.go:123] Gathering logs for describe nodes ...
	I0103 20:18:03.448922   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0103 20:18:03.587950   62015 logs.go:123] Gathering logs for kube-scheduler [03433af76d74a120ca843ab0d9d45d2a0808d9048bf656bff68d7b7371082893] ...
	I0103 20:18:03.587982   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03433af76d74a120ca843ab0d9d45d2a0808d9048bf656bff68d7b7371082893"
	I0103 20:18:03.629411   62015 logs.go:123] Gathering logs for container status ...
	I0103 20:18:03.629438   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 20:18:03.672468   62015 logs.go:123] Gathering logs for dmesg ...
	I0103 20:18:03.672499   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 20:18:03.685645   62015 logs.go:123] Gathering logs for storage-provisioner [367b9549fe5f7c717d71d33d0fbf5559d0b671b4eec29201566aa8354781474d] ...
	I0103 20:18:03.685682   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 367b9549fe5f7c717d71d33d0fbf5559d0b671b4eec29201566aa8354781474d"
	I0103 20:18:01.395689   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:18:03.396256   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:18:06.229417   62015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:18:06.244272   62015 api_server.go:72] duration metric: took 4m15.901019711s to wait for apiserver process to appear ...
	I0103 20:18:06.244306   62015 api_server.go:88] waiting for apiserver healthz status ...
	I0103 20:18:06.244351   62015 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0103 20:18:06.244412   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0103 20:18:06.292204   62015 cri.go:89] found id: "fb19a705262540d0025443f8a9db8a3139cffa87d8fd8f412b12547818a91a8b"
	I0103 20:18:06.292235   62015 cri.go:89] found id: ""
	I0103 20:18:06.292246   62015 logs.go:284] 1 containers: [fb19a705262540d0025443f8a9db8a3139cffa87d8fd8f412b12547818a91a8b]
	I0103 20:18:06.292309   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:06.296724   62015 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0103 20:18:06.296791   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0103 20:18:06.333984   62015 cri.go:89] found id: "f7d2f606bd4457e00acad3e38e40d2a0eeba1830b665bd0b453061447cb63748"
	I0103 20:18:06.334012   62015 cri.go:89] found id: ""
	I0103 20:18:06.334023   62015 logs.go:284] 1 containers: [f7d2f606bd4457e00acad3e38e40d2a0eeba1830b665bd0b453061447cb63748]
	I0103 20:18:06.334079   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:06.338045   62015 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0103 20:18:06.338123   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0103 20:18:06.374586   62015 cri.go:89] found id: "b13d0a23b2b29e43671041dd6b0519d5cdc0ff78c47236c54585056c177f7f4a"
	I0103 20:18:06.374610   62015 cri.go:89] found id: ""
	I0103 20:18:06.374617   62015 logs.go:284] 1 containers: [b13d0a23b2b29e43671041dd6b0519d5cdc0ff78c47236c54585056c177f7f4a]
	I0103 20:18:06.374669   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:06.378720   62015 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0103 20:18:06.378792   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0103 20:18:06.416220   62015 cri.go:89] found id: "03433af76d74a120ca843ab0d9d45d2a0808d9048bf656bff68d7b7371082893"
	I0103 20:18:06.416240   62015 cri.go:89] found id: ""
	I0103 20:18:06.416247   62015 logs.go:284] 1 containers: [03433af76d74a120ca843ab0d9d45d2a0808d9048bf656bff68d7b7371082893]
	I0103 20:18:06.416300   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:06.420190   62015 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0103 20:18:06.420247   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0103 20:18:06.458725   62015 cri.go:89] found id: "250be399ab1a0c358e410293a6952aadd55d48a462f2edb9c6d0b560eb323cd8"
	I0103 20:18:06.458745   62015 cri.go:89] found id: ""
	I0103 20:18:06.458754   62015 logs.go:284] 1 containers: [250be399ab1a0c358e410293a6952aadd55d48a462f2edb9c6d0b560eb323cd8]
	I0103 20:18:06.458808   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:06.462703   62015 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0103 20:18:06.462759   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0103 20:18:06.504559   62015 cri.go:89] found id: "67f470e7e603d5253da506a4ad4eacce339e32b382bb6ad5e981e6f0c40abb85"
	I0103 20:18:06.504587   62015 cri.go:89] found id: ""
	I0103 20:18:06.504596   62015 logs.go:284] 1 containers: [67f470e7e603d5253da506a4ad4eacce339e32b382bb6ad5e981e6f0c40abb85]
	I0103 20:18:06.504659   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:06.508602   62015 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0103 20:18:06.508662   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0103 20:18:06.559810   62015 cri.go:89] found id: ""
	I0103 20:18:06.559833   62015 logs.go:284] 0 containers: []
	W0103 20:18:06.559840   62015 logs.go:286] No container was found matching "kindnet"
	I0103 20:18:06.559846   62015 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0103 20:18:06.559905   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0103 20:18:06.598672   62015 cri.go:89] found id: "08f95eed823c13190efb38f0b605b92442b8229f1fd1e3b9ab3f2d7fdf18c052"
	I0103 20:18:06.598697   62015 cri.go:89] found id: "367b9549fe5f7c717d71d33d0fbf5559d0b671b4eec29201566aa8354781474d"
	I0103 20:18:06.598704   62015 cri.go:89] found id: ""
	I0103 20:18:06.598712   62015 logs.go:284] 2 containers: [08f95eed823c13190efb38f0b605b92442b8229f1fd1e3b9ab3f2d7fdf18c052 367b9549fe5f7c717d71d33d0fbf5559d0b671b4eec29201566aa8354781474d]
	I0103 20:18:06.598766   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:06.602828   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:06.607033   62015 logs.go:123] Gathering logs for describe nodes ...
	I0103 20:18:06.607050   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0103 20:18:06.758606   62015 logs.go:123] Gathering logs for storage-provisioner [367b9549fe5f7c717d71d33d0fbf5559d0b671b4eec29201566aa8354781474d] ...
	I0103 20:18:06.758634   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 367b9549fe5f7c717d71d33d0fbf5559d0b671b4eec29201566aa8354781474d"
	I0103 20:18:06.797521   62015 logs.go:123] Gathering logs for kubelet ...
	I0103 20:18:06.797552   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 20:18:06.856126   62015 logs.go:123] Gathering logs for kube-apiserver [fb19a705262540d0025443f8a9db8a3139cffa87d8fd8f412b12547818a91a8b] ...
	I0103 20:18:06.856159   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb19a705262540d0025443f8a9db8a3139cffa87d8fd8f412b12547818a91a8b"
	I0103 20:18:06.902629   62015 logs.go:123] Gathering logs for etcd [f7d2f606bd4457e00acad3e38e40d2a0eeba1830b665bd0b453061447cb63748] ...
	I0103 20:18:06.902656   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7d2f606bd4457e00acad3e38e40d2a0eeba1830b665bd0b453061447cb63748"
	I0103 20:18:06.953115   62015 logs.go:123] Gathering logs for storage-provisioner [08f95eed823c13190efb38f0b605b92442b8229f1fd1e3b9ab3f2d7fdf18c052] ...
	I0103 20:18:06.953154   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08f95eed823c13190efb38f0b605b92442b8229f1fd1e3b9ab3f2d7fdf18c052"
	I0103 20:18:06.993311   62015 logs.go:123] Gathering logs for CRI-O ...
	I0103 20:18:06.993342   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0103 20:18:07.393614   62015 logs.go:123] Gathering logs for dmesg ...
	I0103 20:18:07.393655   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 20:18:07.408367   62015 logs.go:123] Gathering logs for kube-proxy [250be399ab1a0c358e410293a6952aadd55d48a462f2edb9c6d0b560eb323cd8] ...
	I0103 20:18:07.408397   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 250be399ab1a0c358e410293a6952aadd55d48a462f2edb9c6d0b560eb323cd8"
	I0103 20:18:07.446725   62015 logs.go:123] Gathering logs for container status ...
	I0103 20:18:07.446756   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 20:18:07.494564   62015 logs.go:123] Gathering logs for coredns [b13d0a23b2b29e43671041dd6b0519d5cdc0ff78c47236c54585056c177f7f4a] ...
	I0103 20:18:07.494595   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b13d0a23b2b29e43671041dd6b0519d5cdc0ff78c47236c54585056c177f7f4a"
	I0103 20:18:07.529151   62015 logs.go:123] Gathering logs for kube-scheduler [03433af76d74a120ca843ab0d9d45d2a0808d9048bf656bff68d7b7371082893] ...
	I0103 20:18:07.529176   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03433af76d74a120ca843ab0d9d45d2a0808d9048bf656bff68d7b7371082893"
	I0103 20:18:07.577090   62015 logs.go:123] Gathering logs for kube-controller-manager [67f470e7e603d5253da506a4ad4eacce339e32b382bb6ad5e981e6f0c40abb85] ...
	I0103 20:18:07.577118   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 67f470e7e603d5253da506a4ad4eacce339e32b382bb6ad5e981e6f0c40abb85"
	I0103 20:18:05.895682   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:18:08.395751   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:18:10.396488   62050 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace has status "Ready":"False"
	I0103 20:18:10.133806   62015 api_server.go:253] Checking apiserver healthz at https://192.168.61.245:8443/healthz ...
	I0103 20:18:10.138606   62015 api_server.go:279] https://192.168.61.245:8443/healthz returned 200:
	ok
	I0103 20:18:10.139965   62015 api_server.go:141] control plane version: v1.29.0-rc.2
	I0103 20:18:10.139986   62015 api_server.go:131] duration metric: took 3.895673488s to wait for apiserver health ...
	I0103 20:18:10.140004   62015 system_pods.go:43] waiting for kube-system pods to appear ...
	I0103 20:18:10.140032   62015 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0103 20:18:10.140078   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0103 20:18:10.177309   62015 cri.go:89] found id: "fb19a705262540d0025443f8a9db8a3139cffa87d8fd8f412b12547818a91a8b"
	I0103 20:18:10.177336   62015 cri.go:89] found id: ""
	I0103 20:18:10.177347   62015 logs.go:284] 1 containers: [fb19a705262540d0025443f8a9db8a3139cffa87d8fd8f412b12547818a91a8b]
	I0103 20:18:10.177398   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:10.181215   62015 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0103 20:18:10.181287   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0103 20:18:10.217151   62015 cri.go:89] found id: "f7d2f606bd4457e00acad3e38e40d2a0eeba1830b665bd0b453061447cb63748"
	I0103 20:18:10.217174   62015 cri.go:89] found id: ""
	I0103 20:18:10.217183   62015 logs.go:284] 1 containers: [f7d2f606bd4457e00acad3e38e40d2a0eeba1830b665bd0b453061447cb63748]
	I0103 20:18:10.217242   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:10.221363   62015 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0103 20:18:10.221447   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0103 20:18:10.271359   62015 cri.go:89] found id: "b13d0a23b2b29e43671041dd6b0519d5cdc0ff78c47236c54585056c177f7f4a"
	I0103 20:18:10.271387   62015 cri.go:89] found id: ""
	I0103 20:18:10.271397   62015 logs.go:284] 1 containers: [b13d0a23b2b29e43671041dd6b0519d5cdc0ff78c47236c54585056c177f7f4a]
	I0103 20:18:10.271460   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:10.277366   62015 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0103 20:18:10.277439   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0103 20:18:10.325567   62015 cri.go:89] found id: "03433af76d74a120ca843ab0d9d45d2a0808d9048bf656bff68d7b7371082893"
	I0103 20:18:10.325594   62015 cri.go:89] found id: ""
	I0103 20:18:10.325604   62015 logs.go:284] 1 containers: [03433af76d74a120ca843ab0d9d45d2a0808d9048bf656bff68d7b7371082893]
	I0103 20:18:10.325662   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:10.331222   62015 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0103 20:18:10.331292   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0103 20:18:10.370488   62015 cri.go:89] found id: "250be399ab1a0c358e410293a6952aadd55d48a462f2edb9c6d0b560eb323cd8"
	I0103 20:18:10.370516   62015 cri.go:89] found id: ""
	I0103 20:18:10.370539   62015 logs.go:284] 1 containers: [250be399ab1a0c358e410293a6952aadd55d48a462f2edb9c6d0b560eb323cd8]
	I0103 20:18:10.370598   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:10.375213   62015 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0103 20:18:10.375272   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0103 20:18:10.417606   62015 cri.go:89] found id: "67f470e7e603d5253da506a4ad4eacce339e32b382bb6ad5e981e6f0c40abb85"
	I0103 20:18:10.417626   62015 cri.go:89] found id: ""
	I0103 20:18:10.417633   62015 logs.go:284] 1 containers: [67f470e7e603d5253da506a4ad4eacce339e32b382bb6ad5e981e6f0c40abb85]
	I0103 20:18:10.417678   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:10.421786   62015 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0103 20:18:10.421848   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0103 20:18:10.459092   62015 cri.go:89] found id: ""
	I0103 20:18:10.459119   62015 logs.go:284] 0 containers: []
	W0103 20:18:10.459129   62015 logs.go:286] No container was found matching "kindnet"
	I0103 20:18:10.459136   62015 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0103 20:18:10.459184   62015 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0103 20:18:10.504845   62015 cri.go:89] found id: "08f95eed823c13190efb38f0b605b92442b8229f1fd1e3b9ab3f2d7fdf18c052"
	I0103 20:18:10.504874   62015 cri.go:89] found id: "367b9549fe5f7c717d71d33d0fbf5559d0b671b4eec29201566aa8354781474d"
	I0103 20:18:10.504879   62015 cri.go:89] found id: ""
	I0103 20:18:10.504886   62015 logs.go:284] 2 containers: [08f95eed823c13190efb38f0b605b92442b8229f1fd1e3b9ab3f2d7fdf18c052 367b9549fe5f7c717d71d33d0fbf5559d0b671b4eec29201566aa8354781474d]
	I0103 20:18:10.504935   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:10.509189   62015 ssh_runner.go:195] Run: which crictl
	I0103 20:18:10.513671   62015 logs.go:123] Gathering logs for storage-provisioner [367b9549fe5f7c717d71d33d0fbf5559d0b671b4eec29201566aa8354781474d] ...
	I0103 20:18:10.513692   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 367b9549fe5f7c717d71d33d0fbf5559d0b671b4eec29201566aa8354781474d"
	I0103 20:18:10.553961   62015 logs.go:123] Gathering logs for kubelet ...
	I0103 20:18:10.553988   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 20:18:10.606422   62015 logs.go:123] Gathering logs for dmesg ...
	I0103 20:18:10.606463   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 20:18:10.620647   62015 logs.go:123] Gathering logs for kube-controller-manager [67f470e7e603d5253da506a4ad4eacce339e32b382bb6ad5e981e6f0c40abb85] ...
	I0103 20:18:10.620677   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 67f470e7e603d5253da506a4ad4eacce339e32b382bb6ad5e981e6f0c40abb85"
	I0103 20:18:10.678322   62015 logs.go:123] Gathering logs for describe nodes ...
	I0103 20:18:10.678358   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0103 20:18:10.806514   62015 logs.go:123] Gathering logs for container status ...
	I0103 20:18:10.806569   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 20:18:10.862551   62015 logs.go:123] Gathering logs for kube-apiserver [fb19a705262540d0025443f8a9db8a3139cffa87d8fd8f412b12547818a91a8b] ...
	I0103 20:18:10.862589   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb19a705262540d0025443f8a9db8a3139cffa87d8fd8f412b12547818a91a8b"
	I0103 20:18:10.917533   62015 logs.go:123] Gathering logs for etcd [f7d2f606bd4457e00acad3e38e40d2a0eeba1830b665bd0b453061447cb63748] ...
	I0103 20:18:10.917566   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7d2f606bd4457e00acad3e38e40d2a0eeba1830b665bd0b453061447cb63748"
	I0103 20:18:10.988668   62015 logs.go:123] Gathering logs for storage-provisioner [08f95eed823c13190efb38f0b605b92442b8229f1fd1e3b9ab3f2d7fdf18c052] ...
	I0103 20:18:10.988702   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08f95eed823c13190efb38f0b605b92442b8229f1fd1e3b9ab3f2d7fdf18c052"
	I0103 20:18:11.030485   62015 logs.go:123] Gathering logs for CRI-O ...
	I0103 20:18:11.030549   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0103 20:18:11.425651   62015 logs.go:123] Gathering logs for coredns [b13d0a23b2b29e43671041dd6b0519d5cdc0ff78c47236c54585056c177f7f4a] ...
	I0103 20:18:11.425686   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b13d0a23b2b29e43671041dd6b0519d5cdc0ff78c47236c54585056c177f7f4a"
	I0103 20:18:11.481991   62015 logs.go:123] Gathering logs for kube-scheduler [03433af76d74a120ca843ab0d9d45d2a0808d9048bf656bff68d7b7371082893] ...
	I0103 20:18:11.482019   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03433af76d74a120ca843ab0d9d45d2a0808d9048bf656bff68d7b7371082893"
	I0103 20:18:11.526299   62015 logs.go:123] Gathering logs for kube-proxy [250be399ab1a0c358e410293a6952aadd55d48a462f2edb9c6d0b560eb323cd8] ...
	I0103 20:18:11.526335   62015 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 250be399ab1a0c358e410293a6952aadd55d48a462f2edb9c6d0b560eb323cd8"
	I0103 20:18:14.082821   62015 system_pods.go:59] 8 kube-system pods found
	I0103 20:18:14.082847   62015 system_pods.go:61] "coredns-76f75df574-rbx58" [d5e91e6a-e3f9-4dbc-83ff-3069cb67847c] Running
	I0103 20:18:14.082853   62015 system_pods.go:61] "etcd-no-preload-749210" [3cfe84f3-28bd-490f-a7fc-152c1b9784ce] Running
	I0103 20:18:14.082857   62015 system_pods.go:61] "kube-apiserver-no-preload-749210" [1d9d03fa-23c6-4432-b7ec-905fcab8a628] Running
	I0103 20:18:14.082861   62015 system_pods.go:61] "kube-controller-manager-no-preload-749210" [4e4207ef-8844-4547-88a4-b12026250554] Running
	I0103 20:18:14.082865   62015 system_pods.go:61] "kube-proxy-5hwf4" [98fafdf5-9a74-4c9f-96eb-20064c72c4e1] Running
	I0103 20:18:14.082870   62015 system_pods.go:61] "kube-scheduler-no-preload-749210" [21e70024-26b0-4740-ba52-99893ca20809] Running
	I0103 20:18:14.082876   62015 system_pods.go:61] "metrics-server-57f55c9bc5-tqn5m" [8cc1dc91-fafb-4405-8820-a7f99ccbbb0c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0103 20:18:14.082881   62015 system_pods.go:61] "storage-provisioner" [1bf4f1d7-c083-47e7-9976-76bbc72e7bff] Running
	I0103 20:18:14.082887   62015 system_pods.go:74] duration metric: took 3.942878112s to wait for pod list to return data ...
	I0103 20:18:14.082893   62015 default_sa.go:34] waiting for default service account to be created ...
	I0103 20:18:14.087079   62015 default_sa.go:45] found service account: "default"
	I0103 20:18:14.087106   62015 default_sa.go:55] duration metric: took 4.207195ms for default service account to be created ...
	I0103 20:18:14.087115   62015 system_pods.go:116] waiting for k8s-apps to be running ...
	I0103 20:18:14.094161   62015 system_pods.go:86] 8 kube-system pods found
	I0103 20:18:14.094185   62015 system_pods.go:89] "coredns-76f75df574-rbx58" [d5e91e6a-e3f9-4dbc-83ff-3069cb67847c] Running
	I0103 20:18:14.094190   62015 system_pods.go:89] "etcd-no-preload-749210" [3cfe84f3-28bd-490f-a7fc-152c1b9784ce] Running
	I0103 20:18:14.094195   62015 system_pods.go:89] "kube-apiserver-no-preload-749210" [1d9d03fa-23c6-4432-b7ec-905fcab8a628] Running
	I0103 20:18:14.094199   62015 system_pods.go:89] "kube-controller-manager-no-preload-749210" [4e4207ef-8844-4547-88a4-b12026250554] Running
	I0103 20:18:14.094204   62015 system_pods.go:89] "kube-proxy-5hwf4" [98fafdf5-9a74-4c9f-96eb-20064c72c4e1] Running
	I0103 20:18:14.094208   62015 system_pods.go:89] "kube-scheduler-no-preload-749210" [21e70024-26b0-4740-ba52-99893ca20809] Running
	I0103 20:18:14.094219   62015 system_pods.go:89] "metrics-server-57f55c9bc5-tqn5m" [8cc1dc91-fafb-4405-8820-a7f99ccbbb0c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0103 20:18:14.094231   62015 system_pods.go:89] "storage-provisioner" [1bf4f1d7-c083-47e7-9976-76bbc72e7bff] Running
	I0103 20:18:14.094244   62015 system_pods.go:126] duration metric: took 7.123869ms to wait for k8s-apps to be running ...
	I0103 20:18:14.094256   62015 system_svc.go:44] waiting for kubelet service to be running ....
	I0103 20:18:14.094305   62015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 20:18:14.110365   62015 system_svc.go:56] duration metric: took 16.099582ms WaitForService to wait for kubelet.
	I0103 20:18:14.110400   62015 kubeadm.go:581] duration metric: took 4m23.767155373s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0103 20:18:14.110439   62015 node_conditions.go:102] verifying NodePressure condition ...
	I0103 20:18:14.113809   62015 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0103 20:18:14.113833   62015 node_conditions.go:123] node cpu capacity is 2
	I0103 20:18:14.113842   62015 node_conditions.go:105] duration metric: took 3.394645ms to run NodePressure ...
	I0103 20:18:14.113853   62015 start.go:228] waiting for startup goroutines ...
	I0103 20:18:14.113859   62015 start.go:233] waiting for cluster config update ...
	I0103 20:18:14.113868   62015 start.go:242] writing updated cluster config ...
	I0103 20:18:14.114102   62015 ssh_runner.go:195] Run: rm -f paused
	I0103 20:18:14.163090   62015 start.go:600] kubectl: 1.29.0, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0103 20:18:14.165173   62015 out.go:177] * Done! kubectl is now configured to use "no-preload-749210" cluster and "default" namespace by default
	I0103 20:18:10.896026   62050 pod_ready.go:81] duration metric: took 4m0.007814497s waiting for pod "metrics-server-57f55c9bc5-pgbbj" in "kube-system" namespace to be "Ready" ...
	E0103 20:18:10.896053   62050 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0103 20:18:10.896062   62050 pod_ready.go:38] duration metric: took 4m4.550955933s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:18:10.896076   62050 api_server.go:52] waiting for apiserver process to appear ...
	I0103 20:18:10.896109   62050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0103 20:18:10.896169   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0103 20:18:10.965458   62050 cri.go:89] found id: "ce56b3ad3d4b7d59eb61a980afcc5105c128779756da52fc886d00597b03c6cc"
	I0103 20:18:10.965485   62050 cri.go:89] found id: ""
	I0103 20:18:10.965494   62050 logs.go:284] 1 containers: [ce56b3ad3d4b7d59eb61a980afcc5105c128779756da52fc886d00597b03c6cc]
	I0103 20:18:10.965552   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:10.970818   62050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0103 20:18:10.970890   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0103 20:18:11.014481   62050 cri.go:89] found id: "3bacdb6bf662499f958fae41c1520a7f963ecec63e66bca7076854532316bd9d"
	I0103 20:18:11.014509   62050 cri.go:89] found id: ""
	I0103 20:18:11.014537   62050 logs.go:284] 1 containers: [3bacdb6bf662499f958fae41c1520a7f963ecec63e66bca7076854532316bd9d]
	I0103 20:18:11.014602   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:11.019157   62050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0103 20:18:11.019220   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0103 20:18:11.068101   62050 cri.go:89] found id: "e2370f79911fd2108cab00f1fb2d4c8f16fadefbef4d6ee135f875edd865be06"
	I0103 20:18:11.068129   62050 cri.go:89] found id: ""
	I0103 20:18:11.068138   62050 logs.go:284] 1 containers: [e2370f79911fd2108cab00f1fb2d4c8f16fadefbef4d6ee135f875edd865be06]
	I0103 20:18:11.068202   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:11.075018   62050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0103 20:18:11.075098   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0103 20:18:11.122838   62050 cri.go:89] found id: "abbaa7d1ca8582bf77f6d0951a5eca6711a89471f2239b41df4bd359e01e5a7c"
	I0103 20:18:11.122862   62050 cri.go:89] found id: ""
	I0103 20:18:11.122871   62050 logs.go:284] 1 containers: [abbaa7d1ca8582bf77f6d0951a5eca6711a89471f2239b41df4bd359e01e5a7c]
	I0103 20:18:11.122925   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:11.128488   62050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0103 20:18:11.128563   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0103 20:18:11.178133   62050 cri.go:89] found id: "b1525243614b036cac3291b5a2fbf29c28fdb68757d81418e7e09cfec2c36032"
	I0103 20:18:11.178160   62050 cri.go:89] found id: ""
	I0103 20:18:11.178170   62050 logs.go:284] 1 containers: [b1525243614b036cac3291b5a2fbf29c28fdb68757d81418e7e09cfec2c36032]
	I0103 20:18:11.178233   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:11.182823   62050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0103 20:18:11.182900   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0103 20:18:11.229175   62050 cri.go:89] found id: "2b7de3342fdb5423de182fd009630fc36cee11de3a5f5ff06603fedd80e2d94b"
	I0103 20:18:11.229207   62050 cri.go:89] found id: ""
	I0103 20:18:11.229218   62050 logs.go:284] 1 containers: [2b7de3342fdb5423de182fd009630fc36cee11de3a5f5ff06603fedd80e2d94b]
	I0103 20:18:11.229271   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:11.238617   62050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0103 20:18:11.238686   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0103 20:18:11.289070   62050 cri.go:89] found id: ""
	I0103 20:18:11.289107   62050 logs.go:284] 0 containers: []
	W0103 20:18:11.289115   62050 logs.go:286] No container was found matching "kindnet"
	I0103 20:18:11.289121   62050 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0103 20:18:11.289204   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0103 20:18:11.333342   62050 cri.go:89] found id: "3d1fa8b05cd7cc8dd982013b6295fb6f79f2c8db93aa0c408fc57ea7e898125a"
	I0103 20:18:11.333365   62050 cri.go:89] found id: "365147e198ba529e04c62145182c0e5131c1812d4b8842299d0236cbf556e40f"
	I0103 20:18:11.333370   62050 cri.go:89] found id: ""
	I0103 20:18:11.333376   62050 logs.go:284] 2 containers: [3d1fa8b05cd7cc8dd982013b6295fb6f79f2c8db93aa0c408fc57ea7e898125a 365147e198ba529e04c62145182c0e5131c1812d4b8842299d0236cbf556e40f]
	I0103 20:18:11.333430   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:11.338236   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:11.342643   62050 logs.go:123] Gathering logs for container status ...
	I0103 20:18:11.342668   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 20:18:11.395443   62050 logs.go:123] Gathering logs for describe nodes ...
	I0103 20:18:11.395471   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0103 20:18:11.561224   62050 logs.go:123] Gathering logs for etcd [3bacdb6bf662499f958fae41c1520a7f963ecec63e66bca7076854532316bd9d] ...
	I0103 20:18:11.561258   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bacdb6bf662499f958fae41c1520a7f963ecec63e66bca7076854532316bd9d"
	I0103 20:18:11.619642   62050 logs.go:123] Gathering logs for kube-scheduler [abbaa7d1ca8582bf77f6d0951a5eca6711a89471f2239b41df4bd359e01e5a7c] ...
	I0103 20:18:11.619677   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abbaa7d1ca8582bf77f6d0951a5eca6711a89471f2239b41df4bd359e01e5a7c"
	I0103 20:18:11.656329   62050 logs.go:123] Gathering logs for kube-controller-manager [2b7de3342fdb5423de182fd009630fc36cee11de3a5f5ff06603fedd80e2d94b] ...
	I0103 20:18:11.656370   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b7de3342fdb5423de182fd009630fc36cee11de3a5f5ff06603fedd80e2d94b"
	I0103 20:18:11.710651   62050 logs.go:123] Gathering logs for storage-provisioner [3d1fa8b05cd7cc8dd982013b6295fb6f79f2c8db93aa0c408fc57ea7e898125a] ...
	I0103 20:18:11.710685   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d1fa8b05cd7cc8dd982013b6295fb6f79f2c8db93aa0c408fc57ea7e898125a"
	I0103 20:18:11.756839   62050 logs.go:123] Gathering logs for storage-provisioner [365147e198ba529e04c62145182c0e5131c1812d4b8842299d0236cbf556e40f] ...
	I0103 20:18:11.756866   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 365147e198ba529e04c62145182c0e5131c1812d4b8842299d0236cbf556e40f"
	I0103 20:18:11.791885   62050 logs.go:123] Gathering logs for dmesg ...
	I0103 20:18:11.791920   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 20:18:11.805161   62050 logs.go:123] Gathering logs for CRI-O ...
	I0103 20:18:11.805185   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0103 20:18:12.261916   62050 logs.go:123] Gathering logs for kubelet ...
	I0103 20:18:12.261973   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 20:18:12.316486   62050 logs.go:123] Gathering logs for kube-apiserver [ce56b3ad3d4b7d59eb61a980afcc5105c128779756da52fc886d00597b03c6cc] ...
	I0103 20:18:12.316525   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce56b3ad3d4b7d59eb61a980afcc5105c128779756da52fc886d00597b03c6cc"
	I0103 20:18:12.367998   62050 logs.go:123] Gathering logs for coredns [e2370f79911fd2108cab00f1fb2d4c8f16fadefbef4d6ee135f875edd865be06] ...
	I0103 20:18:12.368032   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2370f79911fd2108cab00f1fb2d4c8f16fadefbef4d6ee135f875edd865be06"
	I0103 20:18:12.404277   62050 logs.go:123] Gathering logs for kube-proxy [b1525243614b036cac3291b5a2fbf29c28fdb68757d81418e7e09cfec2c36032] ...
	I0103 20:18:12.404316   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1525243614b036cac3291b5a2fbf29c28fdb68757d81418e7e09cfec2c36032"
	I0103 20:18:14.943727   62050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:18:14.959322   62050 api_server.go:72] duration metric: took 4m14.593955756s to wait for apiserver process to appear ...
	I0103 20:18:14.959344   62050 api_server.go:88] waiting for apiserver healthz status ...
	I0103 20:18:14.959384   62050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0103 20:18:14.959443   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0103 20:18:15.001580   62050 cri.go:89] found id: "ce56b3ad3d4b7d59eb61a980afcc5105c128779756da52fc886d00597b03c6cc"
	I0103 20:18:15.001613   62050 cri.go:89] found id: ""
	I0103 20:18:15.001624   62050 logs.go:284] 1 containers: [ce56b3ad3d4b7d59eb61a980afcc5105c128779756da52fc886d00597b03c6cc]
	I0103 20:18:15.001688   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:15.005964   62050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0103 20:18:15.006044   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0103 20:18:15.043364   62050 cri.go:89] found id: "3bacdb6bf662499f958fae41c1520a7f963ecec63e66bca7076854532316bd9d"
	I0103 20:18:15.043393   62050 cri.go:89] found id: ""
	I0103 20:18:15.043403   62050 logs.go:284] 1 containers: [3bacdb6bf662499f958fae41c1520a7f963ecec63e66bca7076854532316bd9d]
	I0103 20:18:15.043461   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:15.047226   62050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0103 20:18:15.047291   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0103 20:18:15.091700   62050 cri.go:89] found id: "e2370f79911fd2108cab00f1fb2d4c8f16fadefbef4d6ee135f875edd865be06"
	I0103 20:18:15.091727   62050 cri.go:89] found id: ""
	I0103 20:18:15.091736   62050 logs.go:284] 1 containers: [e2370f79911fd2108cab00f1fb2d4c8f16fadefbef4d6ee135f875edd865be06]
	I0103 20:18:15.091794   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:15.095953   62050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0103 20:18:15.096038   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0103 20:18:15.132757   62050 cri.go:89] found id: "abbaa7d1ca8582bf77f6d0951a5eca6711a89471f2239b41df4bd359e01e5a7c"
	I0103 20:18:15.132785   62050 cri.go:89] found id: ""
	I0103 20:18:15.132796   62050 logs.go:284] 1 containers: [abbaa7d1ca8582bf77f6d0951a5eca6711a89471f2239b41df4bd359e01e5a7c]
	I0103 20:18:15.132856   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:15.137574   62050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0103 20:18:15.137637   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0103 20:18:15.174799   62050 cri.go:89] found id: "b1525243614b036cac3291b5a2fbf29c28fdb68757d81418e7e09cfec2c36032"
	I0103 20:18:15.174827   62050 cri.go:89] found id: ""
	I0103 20:18:15.174836   62050 logs.go:284] 1 containers: [b1525243614b036cac3291b5a2fbf29c28fdb68757d81418e7e09cfec2c36032]
	I0103 20:18:15.174893   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:15.179052   62050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0103 20:18:15.179119   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0103 20:18:15.218730   62050 cri.go:89] found id: "2b7de3342fdb5423de182fd009630fc36cee11de3a5f5ff06603fedd80e2d94b"
	I0103 20:18:15.218761   62050 cri.go:89] found id: ""
	I0103 20:18:15.218770   62050 logs.go:284] 1 containers: [2b7de3342fdb5423de182fd009630fc36cee11de3a5f5ff06603fedd80e2d94b]
	I0103 20:18:15.218829   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:15.222730   62050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0103 20:18:15.222796   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0103 20:18:15.265020   62050 cri.go:89] found id: ""
	I0103 20:18:15.265046   62050 logs.go:284] 0 containers: []
	W0103 20:18:15.265053   62050 logs.go:286] No container was found matching "kindnet"
	I0103 20:18:15.265059   62050 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0103 20:18:15.265122   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0103 20:18:15.307032   62050 cri.go:89] found id: "3d1fa8b05cd7cc8dd982013b6295fb6f79f2c8db93aa0c408fc57ea7e898125a"
	I0103 20:18:15.307059   62050 cri.go:89] found id: "365147e198ba529e04c62145182c0e5131c1812d4b8842299d0236cbf556e40f"
	I0103 20:18:15.307065   62050 cri.go:89] found id: ""
	I0103 20:18:15.307074   62050 logs.go:284] 2 containers: [3d1fa8b05cd7cc8dd982013b6295fb6f79f2c8db93aa0c408fc57ea7e898125a 365147e198ba529e04c62145182c0e5131c1812d4b8842299d0236cbf556e40f]
	I0103 20:18:15.307132   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:15.311275   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:15.315089   62050 logs.go:123] Gathering logs for container status ...
	I0103 20:18:15.315113   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 20:18:15.361815   62050 logs.go:123] Gathering logs for describe nodes ...
	I0103 20:18:15.361840   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0103 20:18:15.493913   62050 logs.go:123] Gathering logs for kube-apiserver [ce56b3ad3d4b7d59eb61a980afcc5105c128779756da52fc886d00597b03c6cc] ...
	I0103 20:18:15.493947   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce56b3ad3d4b7d59eb61a980afcc5105c128779756da52fc886d00597b03c6cc"
	I0103 20:18:15.553841   62050 logs.go:123] Gathering logs for coredns [e2370f79911fd2108cab00f1fb2d4c8f16fadefbef4d6ee135f875edd865be06] ...
	I0103 20:18:15.553881   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2370f79911fd2108cab00f1fb2d4c8f16fadefbef4d6ee135f875edd865be06"
	I0103 20:18:15.590885   62050 logs.go:123] Gathering logs for storage-provisioner [365147e198ba529e04c62145182c0e5131c1812d4b8842299d0236cbf556e40f] ...
	I0103 20:18:15.590911   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 365147e198ba529e04c62145182c0e5131c1812d4b8842299d0236cbf556e40f"
	I0103 20:18:15.630332   62050 logs.go:123] Gathering logs for CRI-O ...
	I0103 20:18:15.630357   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0103 20:18:16.074625   62050 logs.go:123] Gathering logs for kubelet ...
	I0103 20:18:16.074659   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 20:18:16.133116   62050 logs.go:123] Gathering logs for dmesg ...
	I0103 20:18:16.133161   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 20:18:16.147559   62050 logs.go:123] Gathering logs for kube-controller-manager [2b7de3342fdb5423de182fd009630fc36cee11de3a5f5ff06603fedd80e2d94b] ...
	I0103 20:18:16.147585   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b7de3342fdb5423de182fd009630fc36cee11de3a5f5ff06603fedd80e2d94b"
	I0103 20:18:16.199131   62050 logs.go:123] Gathering logs for storage-provisioner [3d1fa8b05cd7cc8dd982013b6295fb6f79f2c8db93aa0c408fc57ea7e898125a] ...
	I0103 20:18:16.199167   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d1fa8b05cd7cc8dd982013b6295fb6f79f2c8db93aa0c408fc57ea7e898125a"
	I0103 20:18:16.238085   62050 logs.go:123] Gathering logs for etcd [3bacdb6bf662499f958fae41c1520a7f963ecec63e66bca7076854532316bd9d] ...
	I0103 20:18:16.238116   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bacdb6bf662499f958fae41c1520a7f963ecec63e66bca7076854532316bd9d"
	I0103 20:18:16.294992   62050 logs.go:123] Gathering logs for kube-proxy [b1525243614b036cac3291b5a2fbf29c28fdb68757d81418e7e09cfec2c36032] ...
	I0103 20:18:16.295032   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1525243614b036cac3291b5a2fbf29c28fdb68757d81418e7e09cfec2c36032"
	I0103 20:18:16.333862   62050 logs.go:123] Gathering logs for kube-scheduler [abbaa7d1ca8582bf77f6d0951a5eca6711a89471f2239b41df4bd359e01e5a7c] ...
	I0103 20:18:16.333896   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abbaa7d1ca8582bf77f6d0951a5eca6711a89471f2239b41df4bd359e01e5a7c"
	I0103 20:18:18.875707   62050 api_server.go:253] Checking apiserver healthz at https://192.168.39.139:8444/healthz ...
	I0103 20:18:18.882546   62050 api_server.go:279] https://192.168.39.139:8444/healthz returned 200:
	ok
	I0103 20:18:18.884633   62050 api_server.go:141] control plane version: v1.28.4
	I0103 20:18:18.884662   62050 api_server.go:131] duration metric: took 3.925311693s to wait for apiserver health ...
	I0103 20:18:18.884672   62050 system_pods.go:43] waiting for kube-system pods to appear ...
	I0103 20:18:18.884701   62050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0103 20:18:18.884765   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0103 20:18:18.922149   62050 cri.go:89] found id: "ce56b3ad3d4b7d59eb61a980afcc5105c128779756da52fc886d00597b03c6cc"
	I0103 20:18:18.922170   62050 cri.go:89] found id: ""
	I0103 20:18:18.922177   62050 logs.go:284] 1 containers: [ce56b3ad3d4b7d59eb61a980afcc5105c128779756da52fc886d00597b03c6cc]
	I0103 20:18:18.922223   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:18.926886   62050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0103 20:18:18.926952   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0103 20:18:18.970009   62050 cri.go:89] found id: "3bacdb6bf662499f958fae41c1520a7f963ecec63e66bca7076854532316bd9d"
	I0103 20:18:18.970035   62050 cri.go:89] found id: ""
	I0103 20:18:18.970043   62050 logs.go:284] 1 containers: [3bacdb6bf662499f958fae41c1520a7f963ecec63e66bca7076854532316bd9d]
	I0103 20:18:18.970107   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:18.974349   62050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0103 20:18:18.974413   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0103 20:18:19.016970   62050 cri.go:89] found id: "e2370f79911fd2108cab00f1fb2d4c8f16fadefbef4d6ee135f875edd865be06"
	I0103 20:18:19.016994   62050 cri.go:89] found id: ""
	I0103 20:18:19.017004   62050 logs.go:284] 1 containers: [e2370f79911fd2108cab00f1fb2d4c8f16fadefbef4d6ee135f875edd865be06]
	I0103 20:18:19.017069   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:19.021899   62050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0103 20:18:19.021959   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0103 20:18:19.076044   62050 cri.go:89] found id: "abbaa7d1ca8582bf77f6d0951a5eca6711a89471f2239b41df4bd359e01e5a7c"
	I0103 20:18:19.076074   62050 cri.go:89] found id: ""
	I0103 20:18:19.076081   62050 logs.go:284] 1 containers: [abbaa7d1ca8582bf77f6d0951a5eca6711a89471f2239b41df4bd359e01e5a7c]
	I0103 20:18:19.076134   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:19.081699   62050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0103 20:18:19.081775   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0103 20:18:19.120022   62050 cri.go:89] found id: "b1525243614b036cac3291b5a2fbf29c28fdb68757d81418e7e09cfec2c36032"
	I0103 20:18:19.120046   62050 cri.go:89] found id: ""
	I0103 20:18:19.120053   62050 logs.go:284] 1 containers: [b1525243614b036cac3291b5a2fbf29c28fdb68757d81418e7e09cfec2c36032]
	I0103 20:18:19.120107   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:19.124627   62050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0103 20:18:19.124698   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0103 20:18:19.165431   62050 cri.go:89] found id: "2b7de3342fdb5423de182fd009630fc36cee11de3a5f5ff06603fedd80e2d94b"
	I0103 20:18:19.165453   62050 cri.go:89] found id: ""
	I0103 20:18:19.165463   62050 logs.go:284] 1 containers: [2b7de3342fdb5423de182fd009630fc36cee11de3a5f5ff06603fedd80e2d94b]
	I0103 20:18:19.165513   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:19.170214   62050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0103 20:18:19.170282   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0103 20:18:19.208676   62050 cri.go:89] found id: ""
	I0103 20:18:19.208706   62050 logs.go:284] 0 containers: []
	W0103 20:18:19.208716   62050 logs.go:286] No container was found matching "kindnet"
	I0103 20:18:19.208724   62050 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0103 20:18:19.208782   62050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0103 20:18:19.246065   62050 cri.go:89] found id: "3d1fa8b05cd7cc8dd982013b6295fb6f79f2c8db93aa0c408fc57ea7e898125a"
	I0103 20:18:19.246092   62050 cri.go:89] found id: "365147e198ba529e04c62145182c0e5131c1812d4b8842299d0236cbf556e40f"
	I0103 20:18:19.246101   62050 cri.go:89] found id: ""
	I0103 20:18:19.246109   62050 logs.go:284] 2 containers: [3d1fa8b05cd7cc8dd982013b6295fb6f79f2c8db93aa0c408fc57ea7e898125a 365147e198ba529e04c62145182c0e5131c1812d4b8842299d0236cbf556e40f]
	I0103 20:18:19.246169   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:19.250217   62050 ssh_runner.go:195] Run: which crictl
	I0103 20:18:19.259598   62050 logs.go:123] Gathering logs for CRI-O ...
	I0103 20:18:19.259628   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0103 20:18:19.643718   62050 logs.go:123] Gathering logs for kube-apiserver [ce56b3ad3d4b7d59eb61a980afcc5105c128779756da52fc886d00597b03c6cc] ...
	I0103 20:18:19.643755   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce56b3ad3d4b7d59eb61a980afcc5105c128779756da52fc886d00597b03c6cc"
	I0103 20:18:19.697873   62050 logs.go:123] Gathering logs for etcd [3bacdb6bf662499f958fae41c1520a7f963ecec63e66bca7076854532316bd9d] ...
	I0103 20:18:19.697905   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bacdb6bf662499f958fae41c1520a7f963ecec63e66bca7076854532316bd9d"
	I0103 20:18:19.762995   62050 logs.go:123] Gathering logs for kubelet ...
	I0103 20:18:19.763030   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 20:18:19.830835   62050 logs.go:123] Gathering logs for describe nodes ...
	I0103 20:18:19.830871   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0103 20:18:19.969465   62050 logs.go:123] Gathering logs for kube-proxy [b1525243614b036cac3291b5a2fbf29c28fdb68757d81418e7e09cfec2c36032] ...
	I0103 20:18:19.969501   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1525243614b036cac3291b5a2fbf29c28fdb68757d81418e7e09cfec2c36032"
	I0103 20:18:20.011269   62050 logs.go:123] Gathering logs for kube-controller-manager [2b7de3342fdb5423de182fd009630fc36cee11de3a5f5ff06603fedd80e2d94b] ...
	I0103 20:18:20.011301   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b7de3342fdb5423de182fd009630fc36cee11de3a5f5ff06603fedd80e2d94b"
	I0103 20:18:20.059317   62050 logs.go:123] Gathering logs for storage-provisioner [3d1fa8b05cd7cc8dd982013b6295fb6f79f2c8db93aa0c408fc57ea7e898125a] ...
	I0103 20:18:20.059352   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d1fa8b05cd7cc8dd982013b6295fb6f79f2c8db93aa0c408fc57ea7e898125a"
	I0103 20:18:20.099428   62050 logs.go:123] Gathering logs for storage-provisioner [365147e198ba529e04c62145182c0e5131c1812d4b8842299d0236cbf556e40f] ...
	I0103 20:18:20.099455   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 365147e198ba529e04c62145182c0e5131c1812d4b8842299d0236cbf556e40f"
	I0103 20:18:20.135773   62050 logs.go:123] Gathering logs for dmesg ...
	I0103 20:18:20.135809   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 20:18:20.149611   62050 logs.go:123] Gathering logs for kube-scheduler [abbaa7d1ca8582bf77f6d0951a5eca6711a89471f2239b41df4bd359e01e5a7c] ...
	I0103 20:18:20.149649   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abbaa7d1ca8582bf77f6d0951a5eca6711a89471f2239b41df4bd359e01e5a7c"
	I0103 20:18:20.190742   62050 logs.go:123] Gathering logs for container status ...
	I0103 20:18:20.190788   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 20:18:20.241115   62050 logs.go:123] Gathering logs for coredns [e2370f79911fd2108cab00f1fb2d4c8f16fadefbef4d6ee135f875edd865be06] ...
	I0103 20:18:20.241142   62050 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2370f79911fd2108cab00f1fb2d4c8f16fadefbef4d6ee135f875edd865be06"
	I0103 20:18:22.789475   62050 system_pods.go:59] 8 kube-system pods found
	I0103 20:18:22.789502   62050 system_pods.go:61] "coredns-5dd5756b68-zxzqg" [d066762e-7e1f-4b3a-9b21-6a7a3ca53edd] Running
	I0103 20:18:22.789507   62050 system_pods.go:61] "etcd-default-k8s-diff-port-018788" [c0023ec6-ae61-4532-840e-287e9945f4ec] Running
	I0103 20:18:22.789512   62050 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-018788" [bba03f36-cef8-4e19-adc5-1a65756bdf1c] Running
	I0103 20:18:22.789516   62050 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-018788" [baf7a3c2-3573-4977-be30-d63e4df2de22] Running
	I0103 20:18:22.789520   62050 system_pods.go:61] "kube-proxy-wqjlv" [de5a1b04-4bce-4111-bfe8-2adb2f947d78] Running
	I0103 20:18:22.789527   62050 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-018788" [cdc74e5c-0085-49ae-9471-fce52a1a6b2f] Running
	I0103 20:18:22.789533   62050 system_pods.go:61] "metrics-server-57f55c9bc5-pgbbj" [ee3963d9-1627-4e78-91e5-1f92c2011f4b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0103 20:18:22.789538   62050 system_pods.go:61] "storage-provisioner" [ef3511cb-5587-4ea5-86b6-d52cc5afb226] Running
	I0103 20:18:22.789544   62050 system_pods.go:74] duration metric: took 3.904866616s to wait for pod list to return data ...
	I0103 20:18:22.789551   62050 default_sa.go:34] waiting for default service account to be created ...
	I0103 20:18:22.791976   62050 default_sa.go:45] found service account: "default"
	I0103 20:18:22.792000   62050 default_sa.go:55] duration metric: took 2.444229ms for default service account to be created ...
	I0103 20:18:22.792007   62050 system_pods.go:116] waiting for k8s-apps to be running ...
	I0103 20:18:22.797165   62050 system_pods.go:86] 8 kube-system pods found
	I0103 20:18:22.797186   62050 system_pods.go:89] "coredns-5dd5756b68-zxzqg" [d066762e-7e1f-4b3a-9b21-6a7a3ca53edd] Running
	I0103 20:18:22.797192   62050 system_pods.go:89] "etcd-default-k8s-diff-port-018788" [c0023ec6-ae61-4532-840e-287e9945f4ec] Running
	I0103 20:18:22.797196   62050 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-018788" [bba03f36-cef8-4e19-adc5-1a65756bdf1c] Running
	I0103 20:18:22.797200   62050 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-018788" [baf7a3c2-3573-4977-be30-d63e4df2de22] Running
	I0103 20:18:22.797204   62050 system_pods.go:89] "kube-proxy-wqjlv" [de5a1b04-4bce-4111-bfe8-2adb2f947d78] Running
	I0103 20:18:22.797209   62050 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-018788" [cdc74e5c-0085-49ae-9471-fce52a1a6b2f] Running
	I0103 20:18:22.797221   62050 system_pods.go:89] "metrics-server-57f55c9bc5-pgbbj" [ee3963d9-1627-4e78-91e5-1f92c2011f4b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0103 20:18:22.797234   62050 system_pods.go:89] "storage-provisioner" [ef3511cb-5587-4ea5-86b6-d52cc5afb226] Running
	I0103 20:18:22.797244   62050 system_pods.go:126] duration metric: took 5.231578ms to wait for k8s-apps to be running ...
	I0103 20:18:22.797256   62050 system_svc.go:44] waiting for kubelet service to be running ....
	I0103 20:18:22.797303   62050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 20:18:22.811467   62050 system_svc.go:56] duration metric: took 14.201511ms WaitForService to wait for kubelet.
	I0103 20:18:22.811503   62050 kubeadm.go:581] duration metric: took 4m22.446143128s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0103 20:18:22.811533   62050 node_conditions.go:102] verifying NodePressure condition ...
	I0103 20:18:22.814594   62050 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0103 20:18:22.814617   62050 node_conditions.go:123] node cpu capacity is 2
	I0103 20:18:22.814629   62050 node_conditions.go:105] duration metric: took 3.089727ms to run NodePressure ...
	I0103 20:18:22.814639   62050 start.go:228] waiting for startup goroutines ...
	I0103 20:18:22.814645   62050 start.go:233] waiting for cluster config update ...
	I0103 20:18:22.814654   62050 start.go:242] writing updated cluster config ...
	I0103 20:18:22.814897   62050 ssh_runner.go:195] Run: rm -f paused
	I0103 20:18:22.864761   62050 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0103 20:18:22.866755   62050 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-018788" cluster and "default" namespace by default
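	Editor's note: for the "default-k8s-diff-port-018788" run above, apiserver health is established by polling https://192.168.39.139:8444/healthz until it returns 200 with body "ok", after which the control-plane version and kube-system pods are checked. A minimal Go sketch of that healthz probe, assuming the cluster-internal certificate is not trusted locally (verification is skipped for this one-off check; the snippet is illustrative, not minikube's implementation):

	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "io"
	        "net/http"
	        "time"
	    )

	    func main() {
	        // Endpoint taken from the log above; adjust host/port for another cluster.
	        url := "https://192.168.39.139:8444/healthz"

	        client := &http.Client{
	            Timeout: 5 * time.Second,
	            // The apiserver serves a cluster-internal certificate, so this one-off
	            // probe skips verification; do not do this for anything long-lived.
	            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	        }

	        resp, err := client.Get(url)
	        if err != nil {
	            fmt.Println("healthz check failed:", err)
	            return
	        }
	        defer resp.Body.Close()

	        body, _ := io.ReadAll(resp.Body)
	        fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
	    }

	A single 200/"ok" response is what the log records at 20:18:18 before the version and pod checks proceed.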
	
	
	==> CRI-O <==
	-- Journal begins at Wed 2024-01-03 20:13:42 UTC, ends at Wed 2024-01-03 20:32:16 UTC. --
	Jan 03 20:32:16 old-k8s-version-927922 crio[717]: time="2024-01-03 20:32:16.508416864Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704313936508402983,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=43ec6def-5d22-4787-af6e-00322d6340cc name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 20:32:16 old-k8s-version-927922 crio[717]: time="2024-01-03 20:32:16.509616746Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=9b3a47b9-4c0b-4a3f-8a14-5ad6146470cd name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:32:16 old-k8s-version-927922 crio[717]: time="2024-01-03 20:32:16.509667296Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=9b3a47b9-4c0b-4a3f-8a14-5ad6146470cd name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:32:16 old-k8s-version-927922 crio[717]: time="2024-01-03 20:32:16.509842576Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:320393ddb07553eb44a54c112d172ce04185d7ac58e27c5d44217b4711153907,PodSandboxId:7c321163110595ffe03bfd0c93467e79648b641fbd7ffaf14461512cc89dba61,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1704312866461634949,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ec52ba5e-d926-4b8f-abb8-0381cf3f985a,},Annotations:map[string]string{io.kubernetes.container.hash: d91788b,io.kubernetes.container.restartCount: 0,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45917899cfd41ece572d722e8d76510aa569a5b9a80e7899d35c3844125855b6,PodSandboxId:d34f9861cf860e8552cc8b0f865e95e6c7acda606aa03eb31e00ebd5afb34591,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1704312863835091625,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-nvbsl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22884cc1-f360-4ee8-bafc-340bb24faa41,},Annotations:map[string]string{io.kubernetes.container.hash: 1b28c3cb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\
":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7169f167164d608b443918e6d53248d93a1f5d91d15c4db2f35a6bc93ee1be3,PodSandboxId:e6ed96711a089716a954eb12c0f266dc158499cd4ba9a4d239004e387003ed42,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704312862364601363,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: 4157ff41-1b3b-4eb7-b23b-2de69398161c,},Annotations:map[string]string{io.kubernetes.container.hash: 70e97194,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a196e4fc88e5e12ebea815c63f5444bdf901c0f88e5e48f515af4a095def802,PodSandboxId:2eb19fa47dc53b41e9c56d34b8d9a4400c037efadaca09b0a7544baf9a66b148,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1704312861740798805,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jk7jw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef720f69-1bfd-4e75-9943-
ff7ee3145ecc,},Annotations:map[string]string{io.kubernetes.container.hash: 8a94f92b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8a40bb274f500d3acbfd95cef5b55e0ea95441522e180afffcc40eaf2605db1,PodSandboxId:f8dee6e4f3ff62e9f966be9cabc065cb086203b28be0cc63887f0dcd958af645,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1704312854877894597,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-927922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fe1bb94b97e48f63d9431bddbebf185,},Annotations:map[string]string{io.kub
ernetes.container.hash: fe931f92,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a82afd69651caaa0dee810c76dd80ddd78630b9ffab8e30e5edd67a82dba78b7,PodSandboxId:c0bdb285cbdce3946787cdb8ae3cf14bda0957ddc972b254fccbfeffac7e06b6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1704312853747571570,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-927922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b39706a67360d65bfa3cf2560791efe9,},Annotations
:map[string]string{io.kubernetes.container.hash: f336ba89,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8babac0762b1da3e7fc5037f5d7cf07ab1bf456ae68951526a6123c7249f18c,PodSandboxId:0970fde04b7f743edc8b79467f4d1b419ace87ff650728f2b5bccbeede0a9e90,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1704312853609423470,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-927922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string
]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40cdf59c968e44473516fdcc829b115c30ac1c817dafebc6dcf8b22fe28171b3,PodSandboxId:4d52f9a6f958830d7b7944f26eafee1430b4f6e21c49fa231e958d49f1e5135c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1704312853354886697,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-927922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12703be281c3cbcafa1a958acc881c41,},Annotations:map[string]string{io.
kubernetes.container.hash: 95ed9a70,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=9b3a47b9-4c0b-4a3f-8a14-5ad6146470cd name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:32:16 old-k8s-version-927922 crio[717]: time="2024-01-03 20:32:16.552694631Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=786bda16-c046-47de-b553-fa07fc2ce4a1 name=/runtime.v1.RuntimeService/Version
	Jan 03 20:32:16 old-k8s-version-927922 crio[717]: time="2024-01-03 20:32:16.552790316Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=786bda16-c046-47de-b553-fa07fc2ce4a1 name=/runtime.v1.RuntimeService/Version
	Jan 03 20:32:16 old-k8s-version-927922 crio[717]: time="2024-01-03 20:32:16.554633609Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=dc4de95d-d70a-44d2-9133-f605fcce9009 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 20:32:16 old-k8s-version-927922 crio[717]: time="2024-01-03 20:32:16.555299078Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704313936555277676,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=dc4de95d-d70a-44d2-9133-f605fcce9009 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 20:32:16 old-k8s-version-927922 crio[717]: time="2024-01-03 20:32:16.555998085Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=fa11b9ca-a30a-4a3e-bfa4-6e02cae16b1f name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:32:16 old-k8s-version-927922 crio[717]: time="2024-01-03 20:32:16.556088513Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=fa11b9ca-a30a-4a3e-bfa4-6e02cae16b1f name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:32:16 old-k8s-version-927922 crio[717]: time="2024-01-03 20:32:16.556324726Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:320393ddb07553eb44a54c112d172ce04185d7ac58e27c5d44217b4711153907,PodSandboxId:7c321163110595ffe03bfd0c93467e79648b641fbd7ffaf14461512cc89dba61,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1704312866461634949,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ec52ba5e-d926-4b8f-abb8-0381cf3f985a,},Annotations:map[string]string{io.kubernetes.container.hash: d91788b,io.kubernetes.container.restartCount: 0,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45917899cfd41ece572d722e8d76510aa569a5b9a80e7899d35c3844125855b6,PodSandboxId:d34f9861cf860e8552cc8b0f865e95e6c7acda606aa03eb31e00ebd5afb34591,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1704312863835091625,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-nvbsl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22884cc1-f360-4ee8-bafc-340bb24faa41,},Annotations:map[string]string{io.kubernetes.container.hash: 1b28c3cb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\
":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7169f167164d608b443918e6d53248d93a1f5d91d15c4db2f35a6bc93ee1be3,PodSandboxId:e6ed96711a089716a954eb12c0f266dc158499cd4ba9a4d239004e387003ed42,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704312862364601363,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: 4157ff41-1b3b-4eb7-b23b-2de69398161c,},Annotations:map[string]string{io.kubernetes.container.hash: 70e97194,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a196e4fc88e5e12ebea815c63f5444bdf901c0f88e5e48f515af4a095def802,PodSandboxId:2eb19fa47dc53b41e9c56d34b8d9a4400c037efadaca09b0a7544baf9a66b148,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1704312861740798805,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jk7jw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef720f69-1bfd-4e75-9943-
ff7ee3145ecc,},Annotations:map[string]string{io.kubernetes.container.hash: 8a94f92b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8a40bb274f500d3acbfd95cef5b55e0ea95441522e180afffcc40eaf2605db1,PodSandboxId:f8dee6e4f3ff62e9f966be9cabc065cb086203b28be0cc63887f0dcd958af645,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1704312854877894597,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-927922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fe1bb94b97e48f63d9431bddbebf185,},Annotations:map[string]string{io.kub
ernetes.container.hash: fe931f92,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a82afd69651caaa0dee810c76dd80ddd78630b9ffab8e30e5edd67a82dba78b7,PodSandboxId:c0bdb285cbdce3946787cdb8ae3cf14bda0957ddc972b254fccbfeffac7e06b6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1704312853747571570,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-927922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b39706a67360d65bfa3cf2560791efe9,},Annotations
:map[string]string{io.kubernetes.container.hash: f336ba89,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8babac0762b1da3e7fc5037f5d7cf07ab1bf456ae68951526a6123c7249f18c,PodSandboxId:0970fde04b7f743edc8b79467f4d1b419ace87ff650728f2b5bccbeede0a9e90,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1704312853609423470,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-927922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string
]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40cdf59c968e44473516fdcc829b115c30ac1c817dafebc6dcf8b22fe28171b3,PodSandboxId:4d52f9a6f958830d7b7944f26eafee1430b4f6e21c49fa231e958d49f1e5135c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1704312853354886697,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-927922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12703be281c3cbcafa1a958acc881c41,},Annotations:map[string]string{io.
kubernetes.container.hash: 95ed9a70,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=fa11b9ca-a30a-4a3e-bfa4-6e02cae16b1f name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:32:16 old-k8s-version-927922 crio[717]: time="2024-01-03 20:32:16.598007469Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=47668eac-276d-465a-a2ff-cb10509c3027 name=/runtime.v1.RuntimeService/Version
	Jan 03 20:32:16 old-k8s-version-927922 crio[717]: time="2024-01-03 20:32:16.598066445Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=47668eac-276d-465a-a2ff-cb10509c3027 name=/runtime.v1.RuntimeService/Version
	Jan 03 20:32:16 old-k8s-version-927922 crio[717]: time="2024-01-03 20:32:16.599658885Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=c9268b1a-32d1-4c4b-a324-7c5aec858e3d name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 20:32:16 old-k8s-version-927922 crio[717]: time="2024-01-03 20:32:16.600044119Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704313936600031627,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=c9268b1a-32d1-4c4b-a324-7c5aec858e3d name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 20:32:16 old-k8s-version-927922 crio[717]: time="2024-01-03 20:32:16.600993945Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1abb41ce-7937-4a3e-a401-129bb6ed49db name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:32:16 old-k8s-version-927922 crio[717]: time="2024-01-03 20:32:16.601040246Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1abb41ce-7937-4a3e-a401-129bb6ed49db name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:32:16 old-k8s-version-927922 crio[717]: time="2024-01-03 20:32:16.601211452Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:320393ddb07553eb44a54c112d172ce04185d7ac58e27c5d44217b4711153907,PodSandboxId:7c321163110595ffe03bfd0c93467e79648b641fbd7ffaf14461512cc89dba61,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1704312866461634949,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ec52ba5e-d926-4b8f-abb8-0381cf3f985a,},Annotations:map[string]string{io.kubernetes.container.hash: d91788b,io.kubernetes.container.restartCount: 0,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45917899cfd41ece572d722e8d76510aa569a5b9a80e7899d35c3844125855b6,PodSandboxId:d34f9861cf860e8552cc8b0f865e95e6c7acda606aa03eb31e00ebd5afb34591,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1704312863835091625,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-nvbsl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22884cc1-f360-4ee8-bafc-340bb24faa41,},Annotations:map[string]string{io.kubernetes.container.hash: 1b28c3cb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\
":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7169f167164d608b443918e6d53248d93a1f5d91d15c4db2f35a6bc93ee1be3,PodSandboxId:e6ed96711a089716a954eb12c0f266dc158499cd4ba9a4d239004e387003ed42,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704312862364601363,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: 4157ff41-1b3b-4eb7-b23b-2de69398161c,},Annotations:map[string]string{io.kubernetes.container.hash: 70e97194,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a196e4fc88e5e12ebea815c63f5444bdf901c0f88e5e48f515af4a095def802,PodSandboxId:2eb19fa47dc53b41e9c56d34b8d9a4400c037efadaca09b0a7544baf9a66b148,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1704312861740798805,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jk7jw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef720f69-1bfd-4e75-9943-
ff7ee3145ecc,},Annotations:map[string]string{io.kubernetes.container.hash: 8a94f92b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8a40bb274f500d3acbfd95cef5b55e0ea95441522e180afffcc40eaf2605db1,PodSandboxId:f8dee6e4f3ff62e9f966be9cabc065cb086203b28be0cc63887f0dcd958af645,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1704312854877894597,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-927922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fe1bb94b97e48f63d9431bddbebf185,},Annotations:map[string]string{io.kub
ernetes.container.hash: fe931f92,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a82afd69651caaa0dee810c76dd80ddd78630b9ffab8e30e5edd67a82dba78b7,PodSandboxId:c0bdb285cbdce3946787cdb8ae3cf14bda0957ddc972b254fccbfeffac7e06b6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1704312853747571570,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-927922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b39706a67360d65bfa3cf2560791efe9,},Annotations
:map[string]string{io.kubernetes.container.hash: f336ba89,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8babac0762b1da3e7fc5037f5d7cf07ab1bf456ae68951526a6123c7249f18c,PodSandboxId:0970fde04b7f743edc8b79467f4d1b419ace87ff650728f2b5bccbeede0a9e90,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1704312853609423470,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-927922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string
]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40cdf59c968e44473516fdcc829b115c30ac1c817dafebc6dcf8b22fe28171b3,PodSandboxId:4d52f9a6f958830d7b7944f26eafee1430b4f6e21c49fa231e958d49f1e5135c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1704312853354886697,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-927922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12703be281c3cbcafa1a958acc881c41,},Annotations:map[string]string{io.
kubernetes.container.hash: 95ed9a70,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1abb41ce-7937-4a3e-a401-129bb6ed49db name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:32:16 old-k8s-version-927922 crio[717]: time="2024-01-03 20:32:16.638722517Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=299a4422-18af-4988-b0f5-c9a02d6150e9 name=/runtime.v1.RuntimeService/Version
	Jan 03 20:32:16 old-k8s-version-927922 crio[717]: time="2024-01-03 20:32:16.638779380Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=299a4422-18af-4988-b0f5-c9a02d6150e9 name=/runtime.v1.RuntimeService/Version
	Jan 03 20:32:16 old-k8s-version-927922 crio[717]: time="2024-01-03 20:32:16.640097774Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=a69048bb-bb7a-4370-8ebc-5e58369ca44e name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 20:32:16 old-k8s-version-927922 crio[717]: time="2024-01-03 20:32:16.640547913Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704313936640531495,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=a69048bb-bb7a-4370-8ebc-5e58369ca44e name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 20:32:16 old-k8s-version-927922 crio[717]: time="2024-01-03 20:32:16.641089799Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=dad62bbd-5684-4dc6-a11c-ec178b24de9a name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:32:16 old-k8s-version-927922 crio[717]: time="2024-01-03 20:32:16.641142174Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=dad62bbd-5684-4dc6-a11c-ec178b24de9a name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:32:16 old-k8s-version-927922 crio[717]: time="2024-01-03 20:32:16.641310028Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:320393ddb07553eb44a54c112d172ce04185d7ac58e27c5d44217b4711153907,PodSandboxId:7c321163110595ffe03bfd0c93467e79648b641fbd7ffaf14461512cc89dba61,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1704312866461634949,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ec52ba5e-d926-4b8f-abb8-0381cf3f985a,},Annotations:map[string]string{io.kubernetes.container.hash: d91788b,io.kubernetes.container.restartCount: 0,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45917899cfd41ece572d722e8d76510aa569a5b9a80e7899d35c3844125855b6,PodSandboxId:d34f9861cf860e8552cc8b0f865e95e6c7acda606aa03eb31e00ebd5afb34591,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1704312863835091625,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-nvbsl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22884cc1-f360-4ee8-bafc-340bb24faa41,},Annotations:map[string]string{io.kubernetes.container.hash: 1b28c3cb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\
":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7169f167164d608b443918e6d53248d93a1f5d91d15c4db2f35a6bc93ee1be3,PodSandboxId:e6ed96711a089716a954eb12c0f266dc158499cd4ba9a4d239004e387003ed42,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704312862364601363,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: 4157ff41-1b3b-4eb7-b23b-2de69398161c,},Annotations:map[string]string{io.kubernetes.container.hash: 70e97194,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a196e4fc88e5e12ebea815c63f5444bdf901c0f88e5e48f515af4a095def802,PodSandboxId:2eb19fa47dc53b41e9c56d34b8d9a4400c037efadaca09b0a7544baf9a66b148,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1704312861740798805,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jk7jw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef720f69-1bfd-4e75-9943-
ff7ee3145ecc,},Annotations:map[string]string{io.kubernetes.container.hash: 8a94f92b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8a40bb274f500d3acbfd95cef5b55e0ea95441522e180afffcc40eaf2605db1,PodSandboxId:f8dee6e4f3ff62e9f966be9cabc065cb086203b28be0cc63887f0dcd958af645,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1704312854877894597,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-927922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fe1bb94b97e48f63d9431bddbebf185,},Annotations:map[string]string{io.kub
ernetes.container.hash: fe931f92,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a82afd69651caaa0dee810c76dd80ddd78630b9ffab8e30e5edd67a82dba78b7,PodSandboxId:c0bdb285cbdce3946787cdb8ae3cf14bda0957ddc972b254fccbfeffac7e06b6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1704312853747571570,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-927922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b39706a67360d65bfa3cf2560791efe9,},Annotations
:map[string]string{io.kubernetes.container.hash: f336ba89,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8babac0762b1da3e7fc5037f5d7cf07ab1bf456ae68951526a6123c7249f18c,PodSandboxId:0970fde04b7f743edc8b79467f4d1b419ace87ff650728f2b5bccbeede0a9e90,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1704312853609423470,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-927922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string
]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40cdf59c968e44473516fdcc829b115c30ac1c817dafebc6dcf8b22fe28171b3,PodSandboxId:4d52f9a6f958830d7b7944f26eafee1430b4f6e21c49fa231e958d49f1e5135c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1704312853354886697,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-927922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12703be281c3cbcafa1a958acc881c41,},Annotations:map[string]string{io.
kubernetes.container.hash: 95ed9a70,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=dad62bbd-5684-4dc6-a11c-ec178b24de9a name=/runtime.v1.RuntimeService/ListContainers
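	
	The two ListContainers dumps above are routine CRI polling; the same view can be pulled straight from CRI-O on the node. A minimal sketch, assuming the profile name old-k8s-version-927922 from this run and that crictl is present on the node image:
	
	  $ minikube ssh -p old-k8s-version-927922 -- sudo crictl ps -a
	  $ minikube ssh -p old-k8s-version-927922 -- sudo crictl inspect 45917899cfd41ece572d722e8d76510aa569a5b9a80e7899d35c3844125855b6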
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	320393ddb0755       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   17 minutes ago      Running             busybox                   0                   7c32116311059       busybox
	45917899cfd41       bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b                                      17 minutes ago      Running             coredns                   0                   d34f9861cf860       coredns-5644d7b6d9-nvbsl
	b7169f167164d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      17 minutes ago      Running             storage-provisioner       0                   e6ed96711a089       storage-provisioner
	7a196e4fc88e5       c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384                                      17 minutes ago      Running             kube-proxy                0                   2eb19fa47dc53       kube-proxy-jk7jw
	c8a40bb274f50       b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed                                      18 minutes ago      Running             etcd                      0                   f8dee6e4f3ff6       etcd-old-k8s-version-927922
	a82afd69651ca       06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d                                      18 minutes ago      Running             kube-controller-manager   0                   c0bdb285cbdce       kube-controller-manager-old-k8s-version-927922
	f8babac0762b1       301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a                                      18 minutes ago      Running             kube-scheduler            0                   0970fde04b7f7       kube-scheduler-old-k8s-version-927922
	40cdf59c968e4       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e                                      18 minutes ago      Running             kube-apiserver            0                   4d52f9a6f9588       kube-apiserver-old-k8s-version-927922
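	
	Everything in this table is Running with restart count 0; the only expected workload missing is metrics-server, which never got past its image pull (see the kubelet section below). The same picture can be confirmed from the Kubernetes side -- a sketch, assuming the kubeconfig context carries the profile name:
	
	  $ kubectl --context old-k8s-version-927922 get pods -A -o wide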
	
	
	==> coredns [45917899cfd41ece572d722e8d76510aa569a5b9a80e7899d35c3844125855b6] <==
	E0103 20:04:34.190923       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	I0103 20:04:34.190824       1 trace.go:82] Trace[859965114]: "Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2024-01-03 20:04:04.190460127 +0000 UTC m=+0.247147512) (total time: 30.000323536s):
	Trace[859965114]: [30.000323536s] [30.000323536s] END
	E0103 20:04:34.191021       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	I0103 20:04:34.198057       1 trace.go:82] Trace[1179518053]: "Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2024-01-03 20:04:04.189200746 +0000 UTC m=+0.245888159) (total time: 30.008836728s):
	Trace[1179518053]: [30.008836728s] [30.008836728s] END
	E0103 20:04:34.198189       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	2024-01-03T20:04:34.587Z [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	2024-01-03T20:04:39.128Z [INFO] plugin/reload: Running configuration MD5 = 6485d707d03bc60ccfd5c7f4afc8c245
	[INFO] Reloading complete
	[INFO] SIGTERM: Shutting down servers then terminating
	.:53
	2024-01-03T20:14:24.087Z [INFO] plugin/reload: Running configuration MD5 = 6485d707d03bc60ccfd5c7f4afc8c245
	2024-01-03T20:14:24.087Z [INFO] CoreDNS-1.6.2
	2024-01-03T20:14:24.087Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	2024-01-03T20:14:24.097Z [INFO] 127.0.0.1:58358 - 7510 "HINFO IN 2319616804106500077.1178016545245940769. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009293278s
	E0103 20:04:34.191021       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E0103 20:04:34.198189       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
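	
	The dial timeouts to 10.96.0.1:443 date from 20:04, before the node restart; after the 20:14 reload CoreDNS comes up cleanly. If in-cluster reachability of the API VIP still needed checking, a throwaway pod would do -- a sketch only, with curlimages/curl as an assumed probe image:
	
	  $ kubectl --context old-k8s-version-927922 run api-probe --rm -it --restart=Never \
	      --image=curlimages/curl -- curl -ksm 5 https://10.96.0.1:443/healthz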
	
	
	==> describe nodes <==
	Name:               old-k8s-version-927922
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-927922
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b6a81cbc05f28310ff11df4170e79e2b8bf477a
	                    minikube.k8s.io/name=old-k8s-version-927922
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_03T20_03_47_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 03 Jan 2024 20:03:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 03 Jan 2024 20:31:50 +0000   Wed, 03 Jan 2024 20:03:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 03 Jan 2024 20:31:50 +0000   Wed, 03 Jan 2024 20:03:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 03 Jan 2024 20:31:50 +0000   Wed, 03 Jan 2024 20:03:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 03 Jan 2024 20:31:50 +0000   Wed, 03 Jan 2024 20:14:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.12
	  Hostname:    old-k8s-version-927922
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 ce300228261d46a38a32a0015400aff0
	 System UUID:                ce300228-261d-46a3-8a32-a0015400aff0
	 Boot ID:                    3e6c84e4-38e8-4e0b-90ee-ebf292985fe7
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  cri-o://1.24.1
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (9 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  default                    busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                coredns-5644d7b6d9-nvbsl                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                etcd-old-k8s-version-927922                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                kube-apiserver-old-k8s-version-927922             250m (12%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                kube-controller-manager-old-k8s-version-927922    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                kube-proxy-jk7jw                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                kube-scheduler-old-k8s-version-927922             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                metrics-server-74d5856cc6-kqzhm                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         17m
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             270Mi (12%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From                                Message
	  ----    ------                   ----               ----                                -------
	  Normal  NodeHasSufficientMemory  28m (x8 over 28m)  kubelet, old-k8s-version-927922     Node old-k8s-version-927922 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m (x7 over 28m)  kubelet, old-k8s-version-927922     Node old-k8s-version-927922 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28m (x8 over 28m)  kubelet, old-k8s-version-927922     Node old-k8s-version-927922 status is now: NodeHasSufficientPID
	  Normal  Starting                 28m                kube-proxy, old-k8s-version-927922  Starting kube-proxy.
	  Normal  Starting                 18m                kubelet, old-k8s-version-927922     Starting kubelet.
	  Normal  NodeHasSufficientMemory  18m (x8 over 18m)  kubelet, old-k8s-version-927922     Node old-k8s-version-927922 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet, old-k8s-version-927922     Node old-k8s-version-927922 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x7 over 18m)  kubelet, old-k8s-version-927922     Node old-k8s-version-927922 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  18m                kubelet, old-k8s-version-927922     Updated Node Allocatable limit across pods
	  Normal  Starting                 17m                kube-proxy, old-k8s-version-927922  Starting kube-proxy.
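	
	The node reports Ready since 20:14:29 with a second set of Starting events, consistent with the restart visible earlier in the logs. The same view can be regenerated against the live cluster with:
	
	  $ kubectl --context old-k8s-version-927922 describe node old-k8s-version-927922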
	
	
	==> dmesg <==
	[Jan 3 20:13] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.070593] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.548648] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.804157] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.153618] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.406103] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.023217] systemd-fstab-generator[642]: Ignoring "noauto" for root device
	[  +0.175537] systemd-fstab-generator[653]: Ignoring "noauto" for root device
	[  +0.214652] systemd-fstab-generator[666]: Ignoring "noauto" for root device
	[  +0.168851] systemd-fstab-generator[677]: Ignoring "noauto" for root device
	[  +0.235349] systemd-fstab-generator[701]: Ignoring "noauto" for root device
	[Jan 3 20:14] systemd-fstab-generator[1032]: Ignoring "noauto" for root device
	[  +0.420984] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[ +23.762124] kauditd_printk_skb: 13 callbacks suppressed
	
	
	==> etcd [c8a40bb274f500d3acbfd95cef5b55e0ea95441522e180afffcc40eaf2605db1] <==
	2024-01-03 20:14:15.004373 I | etcdserver: restarting member ab05bc745795456d in cluster 800e3fcdc6b6742c at commit index 538
	2024-01-03 20:14:15.004563 I | raft: ab05bc745795456d became follower at term 2
	2024-01-03 20:14:15.004647 I | raft: newRaft ab05bc745795456d [peers: [], term: 2, commit: 538, applied: 0, lastindex: 538, lastterm: 2]
	2024-01-03 20:14:15.017386 W | auth: simple token is not cryptographically signed
	2024-01-03 20:14:15.020294 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2024-01-03 20:14:15.021797 I | etcdserver/membership: added member ab05bc745795456d [https://192.168.72.12:2380] to cluster 800e3fcdc6b6742c
	2024-01-03 20:14:15.021927 N | etcdserver/membership: set the initial cluster version to 3.3
	2024-01-03 20:14:15.021970 I | etcdserver/api: enabled capabilities for version 3.3
	2024-01-03 20:14:15.027235 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2024-01-03 20:14:15.027750 I | embed: listening for metrics on http://192.168.72.12:2381
	2024-01-03 20:14:15.027825 I | embed: listening for metrics on http://127.0.0.1:2381
	2024-01-03 20:14:16.305389 I | raft: ab05bc745795456d is starting a new election at term 2
	2024-01-03 20:14:16.305546 I | raft: ab05bc745795456d became candidate at term 3
	2024-01-03 20:14:16.305575 I | raft: ab05bc745795456d received MsgVoteResp from ab05bc745795456d at term 3
	2024-01-03 20:14:16.305597 I | raft: ab05bc745795456d became leader at term 3
	2024-01-03 20:14:16.305614 I | raft: raft.node: ab05bc745795456d elected leader ab05bc745795456d at term 3
	2024-01-03 20:14:16.305927 I | etcdserver: published {Name:old-k8s-version-927922 ClientURLs:[https://192.168.72.12:2379]} to cluster 800e3fcdc6b6742c
	2024-01-03 20:14:16.306261 I | embed: ready to serve client requests
	2024-01-03 20:14:16.306510 I | embed: ready to serve client requests
	2024-01-03 20:14:16.307442 I | embed: serving client requests on 127.0.0.1:2379
	2024-01-03 20:14:16.308880 I | embed: serving client requests on 192.168.72.12:2379
	2024-01-03 20:24:16.330871 I | mvcc: store.index: compact 840
	2024-01-03 20:24:16.332791 I | mvcc: finished scheduled compaction at 840 (took 1.526712ms)
	2024-01-03 20:29:16.337594 I | mvcc: store.index: compact 1058
	2024-01-03 20:29:16.339254 I | mvcc: finished scheduled compaction at 1058 (took 1.200213ms)
	
	
	==> kernel <==
	 20:32:17 up 18 min,  0 users,  load average: 0.38, 0.21, 0.12
	Linux old-k8s-version-927922 5.10.57 #1 SMP Sat Dec 16 11:03:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [40cdf59c968e44473516fdcc829b115c30ac1c817dafebc6dcf8b22fe28171b3] <==
	I0103 20:24:20.576326       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0103 20:24:20.576764       1 handler_proxy.go:99] no RequestInfo found in the context
	E0103 20:24:20.576880       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0103 20:24:20.576987       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0103 20:25:20.577307       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0103 20:25:20.577409       1 handler_proxy.go:99] no RequestInfo found in the context
	E0103 20:25:20.577514       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0103 20:25:20.577528       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0103 20:27:20.577903       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0103 20:27:20.578040       1 handler_proxy.go:99] no RequestInfo found in the context
	E0103 20:27:20.578106       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0103 20:27:20.578118       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0103 20:29:20.579103       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0103 20:29:20.579542       1 handler_proxy.go:99] no RequestInfo found in the context
	E0103 20:29:20.579656       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0103 20:29:20.579686       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0103 20:30:20.580046       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0103 20:30:20.580139       1 handler_proxy.go:99] no RequestInfo found in the context
	E0103 20:30:20.580169       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0103 20:30:20.580176       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
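	
	Every entry in this block is the aggregation layer getting 503s for v1beta1.metrics.k8s.io, which follows from the metrics-server pod never starting (kubelet section below). Two quick checks, sketched under the assumption that the addon registers the usual metrics-server Service in kube-system:
	
	  $ kubectl --context old-k8s-version-927922 get apiservice v1beta1.metrics.k8s.io
	  $ kubectl --context old-k8s-version-927922 -n kube-system get endpoints metrics-server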
	
	
	==> kube-controller-manager [a82afd69651caaa0dee810c76dd80ddd78630b9ffab8e30e5edd67a82dba78b7] <==
	E0103 20:26:13.126051       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0103 20:26:23.467362       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0103 20:26:43.377851       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0103 20:26:55.469749       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0103 20:27:13.629901       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0103 20:27:27.472138       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0103 20:27:43.882159       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0103 20:27:59.474083       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0103 20:28:14.134239       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0103 20:28:31.475777       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0103 20:28:44.386239       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0103 20:29:03.478096       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0103 20:29:14.638261       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0103 20:29:35.480071       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0103 20:29:44.890348       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0103 20:30:07.482136       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0103 20:30:15.142517       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0103 20:30:39.484109       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0103 20:30:45.394439       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0103 20:31:11.486316       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0103 20:31:15.646624       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0103 20:31:43.488345       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0103 20:31:45.898815       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0103 20:32:15.490721       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0103 20:32:16.151019       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	
	
	==> kube-proxy [7a196e4fc88e5e12ebea815c63f5444bdf901c0f88e5e48f515af4a095def802] <==
	W0103 20:04:05.316998       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0103 20:04:05.331408       1 node.go:135] Successfully retrieved node IP: 192.168.72.12
	I0103 20:04:05.331476       1 server_others.go:149] Using iptables Proxier.
	I0103 20:04:05.331887       1 server.go:529] Version: v1.16.0
	I0103 20:04:05.339499       1 config.go:313] Starting service config controller
	I0103 20:04:05.339547       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0103 20:04:05.340541       1 config.go:131] Starting endpoints config controller
	I0103 20:04:05.340587       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0103 20:04:05.441275       1 shared_informer.go:204] Caches are synced for endpoints config 
	I0103 20:04:05.441335       1 shared_informer.go:204] Caches are synced for service config 
	E0103 20:05:21.361533       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: Get https://control-plane.minikube.internal:8443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=498&timeout=7m2s&timeoutSeconds=422&watch=true: dial tcp 192.168.72.12:8443: connect: connection refused
	E0103 20:05:21.362455       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Endpoints: Get https://control-plane.minikube.internal:8443/api/v1/endpoints?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=499&timeout=5m10s&timeoutSeconds=310&watch=true: dial tcp 192.168.72.12:8443: connect: connection refused
	W0103 20:14:21.919840       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0103 20:14:21.930134       1 node.go:135] Successfully retrieved node IP: 192.168.72.12
	I0103 20:14:21.930187       1 server_others.go:149] Using iptables Proxier.
	I0103 20:14:21.930793       1 server.go:529] Version: v1.16.0
	I0103 20:14:21.935276       1 config.go:313] Starting service config controller
	I0103 20:14:21.937684       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0103 20:14:21.935293       1 config.go:131] Starting endpoints config controller
	I0103 20:14:21.938145       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0103 20:14:22.040210       1 shared_informer.go:204] Caches are synced for service config 
	I0103 20:14:22.040416       1 shared_informer.go:204] Caches are synced for endpoints config 
	
	
	==> kube-scheduler [f8babac0762b1da3e7fc5037f5d7cf07ab1bf456ae68951526a6123c7249f18c] <==
	E0103 20:03:43.414176       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0103 20:03:43.414504       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0103 20:03:43.415209       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0103 20:05:21.302955       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: Get https://control-plane.minikube.internal:8443/apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=481&timeout=7m26s&timeoutSeconds=446&watch=true: dial tcp 192.168.72.12:8443: connect: connection refused
	E0103 20:05:21.304245       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=351&timeout=5m22s&timeoutSeconds=322&watch=true: dial tcp 192.168.72.12:8443: connect: connection refused
	E0103 20:05:21.304346       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSINode: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1beta1/csinodes?allowWatchBookmarks=true&resourceVersion=1&timeout=7m11s&timeoutSeconds=431&watch=true: dial tcp 192.168.72.12:8443: connect: connection refused
	E0103 20:05:21.304411       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=1&timeout=6m55s&timeoutSeconds=415&watch=true: dial tcp 192.168.72.12:8443: connect: connection refused
	E0103 20:05:21.304470       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: Get https://control-plane.minikube.internal:8443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=440&timeout=6m40s&timeoutSeconds=400&watch=true: dial tcp 192.168.72.12:8443: connect: connection refused
	E0103 20:05:21.304531       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: Get https://control-plane.minikube.internal:8443/apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=1&timeout=9m36s&timeoutSeconds=576&watch=true: dial tcp 192.168.72.12:8443: connect: connection refused
	E0103 20:05:21.304581       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: Get https://control-plane.minikube.internal:8443/api/v1/services?allowWatchBookmarks=true&resourceVersion=498&timeout=6m55s&timeoutSeconds=415&watch=true: dial tcp 192.168.72.12:8443: connect: connection refused
	E0103 20:05:21.304627       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: Get https://control-plane.minikube.internal:8443/api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=1&timeout=8m51s&timeoutSeconds=531&watch=true: dial tcp 192.168.72.12:8443: connect: connection refused
	E0103 20:05:21.304696       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: Get https://control-plane.minikube.internal:8443/apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=1&timeout=7m18s&timeoutSeconds=438&watch=true: dial tcp 192.168.72.12:8443: connect: connection refused
	E0103 20:05:21.304773       1 reflector.go:280] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to watch *v1.Pod: Get https://control-plane.minikube.internal:8443/api/v1/pods?allowWatchBookmarks=true&fieldSelector=status.phase%3DFailed%2Cstatus.phase%3DSucceeded&resourceVersion=473&timeoutSeconds=443&watch=true: dial tcp 192.168.72.12:8443: connect: connection refused
	E0103 20:05:21.309721       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=1&timeout=7m41s&timeoutSeconds=461&watch=true: dial tcp 192.168.72.12:8443: connect: connection refused
	I0103 20:14:14.750765       1 serving.go:319] Generated self-signed cert in-memory
	W0103 20:14:19.571751       1 authentication.go:262] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0103 20:14:19.573092       1 authentication.go:199] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0103 20:14:19.573153       1 authentication.go:200] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0103 20:14:19.573182       1 authentication.go:201] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0103 20:14:19.585161       1 server.go:143] Version: v1.16.0
	I0103 20:14:19.585383       1 defaults.go:91] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
	W0103 20:14:19.596189       1 authorization.go:47] Authorization is disabled
	W0103 20:14:19.596264       1 authentication.go:79] Authentication is disabled
	I0103 20:14:19.596288       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I0103 20:14:19.597084       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	
	
	==> kubelet <==
	-- Journal begins at Wed 2024-01-03 20:13:42 UTC, ends at Wed 2024-01-03 20:32:17 UTC. --
	Jan 03 20:27:49 old-k8s-version-927922 kubelet[1038]: E0103 20:27:49.478501    1038 pod_workers.go:191] Error syncing pod 3fd1f766-d011-4591-a332-6d9b50832444 ("metrics-server-74d5856cc6-kqzhm_kube-system(3fd1f766-d011-4591-a332-6d9b50832444)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 03 20:28:02 old-k8s-version-927922 kubelet[1038]: E0103 20:28:02.480075    1038 pod_workers.go:191] Error syncing pod 3fd1f766-d011-4591-a332-6d9b50832444 ("metrics-server-74d5856cc6-kqzhm_kube-system(3fd1f766-d011-4591-a332-6d9b50832444)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 03 20:28:13 old-k8s-version-927922 kubelet[1038]: E0103 20:28:13.478595    1038 pod_workers.go:191] Error syncing pod 3fd1f766-d011-4591-a332-6d9b50832444 ("metrics-server-74d5856cc6-kqzhm_kube-system(3fd1f766-d011-4591-a332-6d9b50832444)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 03 20:28:25 old-k8s-version-927922 kubelet[1038]: E0103 20:28:25.478626    1038 pod_workers.go:191] Error syncing pod 3fd1f766-d011-4591-a332-6d9b50832444 ("metrics-server-74d5856cc6-kqzhm_kube-system(3fd1f766-d011-4591-a332-6d9b50832444)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 03 20:28:39 old-k8s-version-927922 kubelet[1038]: E0103 20:28:39.478812    1038 pod_workers.go:191] Error syncing pod 3fd1f766-d011-4591-a332-6d9b50832444 ("metrics-server-74d5856cc6-kqzhm_kube-system(3fd1f766-d011-4591-a332-6d9b50832444)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 03 20:28:50 old-k8s-version-927922 kubelet[1038]: E0103 20:28:50.479023    1038 pod_workers.go:191] Error syncing pod 3fd1f766-d011-4591-a332-6d9b50832444 ("metrics-server-74d5856cc6-kqzhm_kube-system(3fd1f766-d011-4591-a332-6d9b50832444)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 03 20:29:01 old-k8s-version-927922 kubelet[1038]: E0103 20:29:01.478895    1038 pod_workers.go:191] Error syncing pod 3fd1f766-d011-4591-a332-6d9b50832444 ("metrics-server-74d5856cc6-kqzhm_kube-system(3fd1f766-d011-4591-a332-6d9b50832444)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 03 20:29:12 old-k8s-version-927922 kubelet[1038]: E0103 20:29:12.553789    1038 container_manager_linux.go:510] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service
	Jan 03 20:29:16 old-k8s-version-927922 kubelet[1038]: E0103 20:29:16.478915    1038 pod_workers.go:191] Error syncing pod 3fd1f766-d011-4591-a332-6d9b50832444 ("metrics-server-74d5856cc6-kqzhm_kube-system(3fd1f766-d011-4591-a332-6d9b50832444)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 03 20:29:28 old-k8s-version-927922 kubelet[1038]: E0103 20:29:28.478599    1038 pod_workers.go:191] Error syncing pod 3fd1f766-d011-4591-a332-6d9b50832444 ("metrics-server-74d5856cc6-kqzhm_kube-system(3fd1f766-d011-4591-a332-6d9b50832444)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 03 20:29:40 old-k8s-version-927922 kubelet[1038]: E0103 20:29:40.478264    1038 pod_workers.go:191] Error syncing pod 3fd1f766-d011-4591-a332-6d9b50832444 ("metrics-server-74d5856cc6-kqzhm_kube-system(3fd1f766-d011-4591-a332-6d9b50832444)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 03 20:29:51 old-k8s-version-927922 kubelet[1038]: E0103 20:29:51.478559    1038 pod_workers.go:191] Error syncing pod 3fd1f766-d011-4591-a332-6d9b50832444 ("metrics-server-74d5856cc6-kqzhm_kube-system(3fd1f766-d011-4591-a332-6d9b50832444)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 03 20:30:06 old-k8s-version-927922 kubelet[1038]: E0103 20:30:06.479048    1038 pod_workers.go:191] Error syncing pod 3fd1f766-d011-4591-a332-6d9b50832444 ("metrics-server-74d5856cc6-kqzhm_kube-system(3fd1f766-d011-4591-a332-6d9b50832444)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 03 20:30:21 old-k8s-version-927922 kubelet[1038]: E0103 20:30:21.489320    1038 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 03 20:30:21 old-k8s-version-927922 kubelet[1038]: E0103 20:30:21.489528    1038 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 03 20:30:21 old-k8s-version-927922 kubelet[1038]: E0103 20:30:21.489587    1038 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 03 20:30:21 old-k8s-version-927922 kubelet[1038]: E0103 20:30:21.489628    1038 pod_workers.go:191] Error syncing pod 3fd1f766-d011-4591-a332-6d9b50832444 ("metrics-server-74d5856cc6-kqzhm_kube-system(3fd1f766-d011-4591-a332-6d9b50832444)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Jan 03 20:30:34 old-k8s-version-927922 kubelet[1038]: E0103 20:30:34.479574    1038 pod_workers.go:191] Error syncing pod 3fd1f766-d011-4591-a332-6d9b50832444 ("metrics-server-74d5856cc6-kqzhm_kube-system(3fd1f766-d011-4591-a332-6d9b50832444)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 03 20:30:46 old-k8s-version-927922 kubelet[1038]: E0103 20:30:46.484289    1038 pod_workers.go:191] Error syncing pod 3fd1f766-d011-4591-a332-6d9b50832444 ("metrics-server-74d5856cc6-kqzhm_kube-system(3fd1f766-d011-4591-a332-6d9b50832444)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 03 20:30:57 old-k8s-version-927922 kubelet[1038]: E0103 20:30:57.478802    1038 pod_workers.go:191] Error syncing pod 3fd1f766-d011-4591-a332-6d9b50832444 ("metrics-server-74d5856cc6-kqzhm_kube-system(3fd1f766-d011-4591-a332-6d9b50832444)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 03 20:31:09 old-k8s-version-927922 kubelet[1038]: E0103 20:31:09.479156    1038 pod_workers.go:191] Error syncing pod 3fd1f766-d011-4591-a332-6d9b50832444 ("metrics-server-74d5856cc6-kqzhm_kube-system(3fd1f766-d011-4591-a332-6d9b50832444)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 03 20:31:24 old-k8s-version-927922 kubelet[1038]: E0103 20:31:24.479630    1038 pod_workers.go:191] Error syncing pod 3fd1f766-d011-4591-a332-6d9b50832444 ("metrics-server-74d5856cc6-kqzhm_kube-system(3fd1f766-d011-4591-a332-6d9b50832444)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 03 20:31:37 old-k8s-version-927922 kubelet[1038]: E0103 20:31:37.478734    1038 pod_workers.go:191] Error syncing pod 3fd1f766-d011-4591-a332-6d9b50832444 ("metrics-server-74d5856cc6-kqzhm_kube-system(3fd1f766-d011-4591-a332-6d9b50832444)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 03 20:31:50 old-k8s-version-927922 kubelet[1038]: E0103 20:31:50.479116    1038 pod_workers.go:191] Error syncing pod 3fd1f766-d011-4591-a332-6d9b50832444 ("metrics-server-74d5856cc6-kqzhm_kube-system(3fd1f766-d011-4591-a332-6d9b50832444)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 03 20:32:04 old-k8s-version-927922 kubelet[1038]: E0103 20:32:04.478979    1038 pod_workers.go:191] Error syncing pod 3fd1f766-d011-4591-a332-6d9b50832444 ("metrics-server-74d5856cc6-kqzhm_kube-system(3fd1f766-d011-4591-a332-6d9b50832444)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	
	==> storage-provisioner [b7169f167164d608b443918e6d53248d93a1f5d91d15c4db2f35a6bc93ee1be3] <==
	I0103 20:04:05.743846       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0103 20:04:05.765226       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0103 20:04:05.765379       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0103 20:04:05.780549       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0103 20:04:05.781639       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-927922_40b0ea06-1db7-4d9c-9667-99fc64ff8309!
	I0103 20:04:05.784315       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b3289e9e-95c1-435d-9042-5a2215b61059", APIVersion:"v1", ResourceVersion:"389", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-927922_40b0ea06-1db7-4d9c-9667-99fc64ff8309 became leader
	I0103 20:04:05.882551       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-927922_40b0ea06-1db7-4d9c-9667-99fc64ff8309!
	I0103 20:14:22.467660       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0103 20:14:22.481894       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0103 20:14:22.481982       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0103 20:14:39.886080       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0103 20:14:39.886293       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-927922_71e43d0a-bca4-4c20-9c43-10ee3df29725!
	I0103 20:14:39.891098       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b3289e9e-95c1-435d-9042-5a2215b61059", APIVersion:"v1", ResourceVersion:"612", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-927922_71e43d0a-bca4-4c20-9c43-10ee3df29725 became leader
	I0103 20:14:39.987634       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-927922_71e43d0a-bca4-4c20-9c43-10ee3df29725!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-927922 -n old-k8s-version-927922
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-927922 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-kqzhm
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-927922 describe pod metrics-server-74d5856cc6-kqzhm
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-927922 describe pod metrics-server-74d5856cc6-kqzhm: exit status 1 (71.526109ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-kqzhm" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-927922 describe pod metrics-server-74d5856cc6-kqzhm: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (523.07s)
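Note on the failure above: the repeated ErrImagePull/ImagePullBackOff entries in the kubelet log are a direct consequence of this test re-pointing the metrics-server image at the unreachable registry fake.domain (the same "addons enable metrics-server ... --registries=MetricsServer=fake.domain" command is visible in the Audit table reproduced in the next section's logs), so every pull of fake.domain/registry.k8s.io/echoserver:1.4 fails DNS resolution; the "not found" result from the describe step suggests the pod was deleted between the pod listing and the describe call. A minimal sketch of how the unresolvable registry could be confirmed by hand (illustrative only; it assumes the old-k8s-version-927922 profile is still running and that nslookup and crictl are available inside the VM):

	# show that the fake registry host does not resolve from inside the node
	minikube -p old-k8s-version-927922 ssh -- nslookup fake.domain
	# reproduce the kubelet's pull failure directly against the CRI-O runtime
	minikube -p old-k8s-version-927922 ssh -- sudo crictl pull fake.domain/registry.k8s.io/echoserver:1.4

Both commands are expected to fail with a "no such host" style error, matching the rpc errors in the kubelet log above.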

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (449.68s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0103 20:26:42.554604   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/custom-flannel-719541/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-451331 -n embed-certs-451331
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-01-03 20:34:09.292871108 +0000 UTC m=+5814.765448104
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-451331 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-451331 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.719µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-451331 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
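Two details in the failure above: the describe call at start_stop_delete_test.go:291 exits after only 1.719µs with "context deadline exceeded", which suggests the shared 9m0s deadline had already expired before the command was launched, and for the same reason the "Addon deployment info:" that start_stop_delete_test.go:297 tries to match against is empty. A minimal sketch, assuming the embed-certs-451331 cluster were still reachable, of how the images used by the dashboard deployments could be inspected by hand:

	# list each kubernetes-dashboard deployment together with its container images
	kubectl --context embed-certs-451331 -n kubernetes-dashboard get deploy \
	  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.template.spec.containers[*].image}{"\n"}{end}'

The assertion expects to find registry.k8s.io/echoserver:1.4 in that output, since the MetricsScraper image was overridden to it when the dashboard addon was enabled (see the "addons enable dashboard -p embed-certs-451331" entry in the Audit table below).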
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-451331 -n embed-certs-451331
E0103 20:34:09.452558   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/bridge-719541/client.crt: no such file or directory
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-451331 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-451331 logs -n 25: (1.29801521s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-719541 sudo crio                             | bridge-719541                | jenkins | v1.32.0 | 03 Jan 24 20:04 UTC | 03 Jan 24 20:04 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-719541                                       | bridge-719541                | jenkins | v1.32.0 | 03 Jan 24 20:04 UTC | 03 Jan 24 20:04 UTC |
	| delete  | -p                                                     | disable-driver-mounts-350596 | jenkins | v1.32.0 | 03 Jan 24 20:04 UTC | 03 Jan 24 20:04 UTC |
	|         | disable-driver-mounts-350596                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-018788 | jenkins | v1.32.0 | 03 Jan 24 20:04 UTC | 03 Jan 24 20:06 UTC |
	|         | default-k8s-diff-port-018788                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-927922        | old-k8s-version-927922       | jenkins | v1.32.0 | 03 Jan 24 20:05 UTC | 03 Jan 24 20:05 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-927922                              | old-k8s-version-927922       | jenkins | v1.32.0 | 03 Jan 24 20:05 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-451331            | embed-certs-451331           | jenkins | v1.32.0 | 03 Jan 24 20:05 UTC | 03 Jan 24 20:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-451331                                  | embed-certs-451331           | jenkins | v1.32.0 | 03 Jan 24 20:06 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-749210             | no-preload-749210            | jenkins | v1.32.0 | 03 Jan 24 20:06 UTC | 03 Jan 24 20:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-749210                                   | no-preload-749210            | jenkins | v1.32.0 | 03 Jan 24 20:06 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-018788  | default-k8s-diff-port-018788 | jenkins | v1.32.0 | 03 Jan 24 20:06 UTC | 03 Jan 24 20:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-018788 | jenkins | v1.32.0 | 03 Jan 24 20:06 UTC |                     |
	|         | default-k8s-diff-port-018788                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-927922             | old-k8s-version-927922       | jenkins | v1.32.0 | 03 Jan 24 20:07 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-927922                              | old-k8s-version-927922       | jenkins | v1.32.0 | 03 Jan 24 20:07 UTC | 03 Jan 24 20:14 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-451331                 | embed-certs-451331           | jenkins | v1.32.0 | 03 Jan 24 20:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-451331                                  | embed-certs-451331           | jenkins | v1.32.0 | 03 Jan 24 20:08 UTC | 03 Jan 24 20:17 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-749210                  | no-preload-749210            | jenkins | v1.32.0 | 03 Jan 24 20:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-018788       | default-k8s-diff-port-018788 | jenkins | v1.32.0 | 03 Jan 24 20:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-749210                                   | no-preload-749210            | jenkins | v1.32.0 | 03 Jan 24 20:09 UTC | 03 Jan 24 20:18 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-018788 | jenkins | v1.32.0 | 03 Jan 24 20:09 UTC | 03 Jan 24 20:18 UTC |
	|         | default-k8s-diff-port-018788                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-927922                              | old-k8s-version-927922       | jenkins | v1.32.0 | 03 Jan 24 20:32 UTC | 03 Jan 24 20:32 UTC |
	| start   | -p newest-cni-195281 --memory=2200 --alsologtostderr   | newest-cni-195281            | jenkins | v1.32.0 | 03 Jan 24 20:32 UTC | 03 Jan 24 20:33 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p no-preload-749210                                   | no-preload-749210            | jenkins | v1.32.0 | 03 Jan 24 20:33 UTC | 03 Jan 24 20:33 UTC |
	| addons  | enable metrics-server -p newest-cni-195281             | newest-cni-195281            | jenkins | v1.32.0 | 03 Jan 24 20:33 UTC | 03 Jan 24 20:33 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-195281                                   | newest-cni-195281            | jenkins | v1.32.0 | 03 Jan 24 20:33 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/03 20:32:19
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0103 20:32:19.309136   67249 out.go:296] Setting OutFile to fd 1 ...
	I0103 20:32:19.309476   67249 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 20:32:19.309490   67249 out.go:309] Setting ErrFile to fd 2...
	I0103 20:32:19.309497   67249 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 20:32:19.309714   67249 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17885-9609/.minikube/bin
	I0103 20:32:19.310342   67249 out.go:303] Setting JSON to false
	I0103 20:32:19.311306   67249 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8087,"bootTime":1704305853,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0103 20:32:19.311373   67249 start.go:138] virtualization: kvm guest
	I0103 20:32:19.314262   67249 out.go:177] * [newest-cni-195281] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0103 20:32:19.316078   67249 out.go:177]   - MINIKUBE_LOCATION=17885
	I0103 20:32:19.316020   67249 notify.go:220] Checking for updates...
	I0103 20:32:19.318020   67249 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0103 20:32:19.319745   67249 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17885-9609/kubeconfig
	I0103 20:32:19.321476   67249 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17885-9609/.minikube
	I0103 20:32:19.323306   67249 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0103 20:32:19.325247   67249 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0103 20:32:19.327385   67249 config.go:182] Loaded profile config "default-k8s-diff-port-018788": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 20:32:19.327493   67249 config.go:182] Loaded profile config "embed-certs-451331": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 20:32:19.327621   67249 config.go:182] Loaded profile config "no-preload-749210": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0103 20:32:19.327723   67249 driver.go:392] Setting default libvirt URI to qemu:///system
	I0103 20:32:19.368449   67249 out.go:177] * Using the kvm2 driver based on user configuration
	I0103 20:32:19.369981   67249 start.go:298] selected driver: kvm2
	I0103 20:32:19.369999   67249 start.go:902] validating driver "kvm2" against <nil>
	I0103 20:32:19.370010   67249 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0103 20:32:19.370814   67249 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:32:19.370900   67249 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17885-9609/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0103 20:32:19.386697   67249 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0103 20:32:19.386765   67249 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	W0103 20:32:19.386794   67249 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0103 20:32:19.387069   67249 start_flags.go:950] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0103 20:32:19.387130   67249 cni.go:84] Creating CNI manager for ""
	I0103 20:32:19.387146   67249 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0103 20:32:19.387180   67249 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0103 20:32:19.387187   67249 start_flags.go:323] config:
	{Name:newest-cni-195281 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-195281 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunti
me:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP
: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 20:32:19.387359   67249 iso.go:125] acquiring lock: {Name:mk59d09085a9554144b68de9b7bfe0e0fce53cc5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:32:19.390156   67249 out.go:177] * Starting control plane node newest-cni-195281 in cluster newest-cni-195281
	I0103 20:32:19.391874   67249 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0103 20:32:19.391934   67249 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0103 20:32:19.391952   67249 cache.go:56] Caching tarball of preloaded images
	I0103 20:32:19.392059   67249 preload.go:174] Found /home/jenkins/minikube-integration/17885-9609/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0103 20:32:19.392071   67249 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on crio
	I0103 20:32:19.392191   67249 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/newest-cni-195281/config.json ...
	I0103 20:32:19.392208   67249 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/newest-cni-195281/config.json: {Name:mk604433cce431aecc704e6ae9cbe8e69956f33d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:32:19.392355   67249 start.go:365] acquiring machines lock for newest-cni-195281: {Name:mk43df5d7e9fef8aa5f3e5c539ca15bff35ae8cf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0103 20:32:19.392390   67249 start.go:369] acquired machines lock for "newest-cni-195281" in 22.434µs
	I0103 20:32:19.392407   67249 start.go:93] Provisioning new machine with config: &{Name:newest-cni-195281 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-195281 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/
home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0103 20:32:19.392486   67249 start.go:125] createHost starting for "" (driver="kvm2")
	I0103 20:32:19.394467   67249 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0103 20:32:19.394687   67249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:32:19.394745   67249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:32:19.410171   67249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37657
	I0103 20:32:19.410720   67249 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:32:19.411315   67249 main.go:141] libmachine: Using API Version  1
	I0103 20:32:19.411339   67249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:32:19.411722   67249 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:32:19.411889   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetMachineName
	I0103 20:32:19.412083   67249 main.go:141] libmachine: (newest-cni-195281) Calling .DriverName
	I0103 20:32:19.412262   67249 start.go:159] libmachine.API.Create for "newest-cni-195281" (driver="kvm2")
	I0103 20:32:19.412296   67249 client.go:168] LocalClient.Create starting
	I0103 20:32:19.412334   67249 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem
	I0103 20:32:19.412371   67249 main.go:141] libmachine: Decoding PEM data...
	I0103 20:32:19.412386   67249 main.go:141] libmachine: Parsing certificate...
	I0103 20:32:19.412440   67249 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem
	I0103 20:32:19.412472   67249 main.go:141] libmachine: Decoding PEM data...
	I0103 20:32:19.412486   67249 main.go:141] libmachine: Parsing certificate...
	I0103 20:32:19.412501   67249 main.go:141] libmachine: Running pre-create checks...
	I0103 20:32:19.412510   67249 main.go:141] libmachine: (newest-cni-195281) Calling .PreCreateCheck
	I0103 20:32:19.412860   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetConfigRaw
	I0103 20:32:19.413237   67249 main.go:141] libmachine: Creating machine...
	I0103 20:32:19.413252   67249 main.go:141] libmachine: (newest-cni-195281) Calling .Create
	I0103 20:32:19.413368   67249 main.go:141] libmachine: (newest-cni-195281) Creating KVM machine...
	I0103 20:32:19.414780   67249 main.go:141] libmachine: (newest-cni-195281) DBG | found existing default KVM network
	I0103 20:32:19.416065   67249 main.go:141] libmachine: (newest-cni-195281) DBG | I0103 20:32:19.415922   67271 network.go:214] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:6a:55:bb} reservation:<nil>}
	I0103 20:32:19.417061   67249 main.go:141] libmachine: (newest-cni-195281) DBG | I0103 20:32:19.416867   67271 network.go:214] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:e5:bd:db} reservation:<nil>}
	I0103 20:32:19.417786   67249 main.go:141] libmachine: (newest-cni-195281) DBG | I0103 20:32:19.417674   67271 network.go:214] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:ae:17:ed} reservation:<nil>}
	I0103 20:32:19.418963   67249 main.go:141] libmachine: (newest-cni-195281) DBG | I0103 20:32:19.418888   67271 network.go:209] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00027f800}
	I0103 20:32:19.425096   67249 main.go:141] libmachine: (newest-cni-195281) DBG | trying to create private KVM network mk-newest-cni-195281 192.168.72.0/24...
	I0103 20:32:19.509409   67249 main.go:141] libmachine: (newest-cni-195281) Setting up store path in /home/jenkins/minikube-integration/17885-9609/.minikube/machines/newest-cni-195281 ...
	I0103 20:32:19.509454   67249 main.go:141] libmachine: (newest-cni-195281) DBG | private KVM network mk-newest-cni-195281 192.168.72.0/24 created
	I0103 20:32:19.509473   67249 main.go:141] libmachine: (newest-cni-195281) Building disk image from file:///home/jenkins/minikube-integration/17885-9609/.minikube/cache/iso/amd64/minikube-v1.32.1-1702708929-17806-amd64.iso
	I0103 20:32:19.509514   67249 main.go:141] libmachine: (newest-cni-195281) Downloading /home/jenkins/minikube-integration/17885-9609/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17885-9609/.minikube/cache/iso/amd64/minikube-v1.32.1-1702708929-17806-amd64.iso...
	I0103 20:32:19.509675   67249 main.go:141] libmachine: (newest-cni-195281) DBG | I0103 20:32:19.509290   67271 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17885-9609/.minikube
	I0103 20:32:19.721072   67249 main.go:141] libmachine: (newest-cni-195281) DBG | I0103 20:32:19.720924   67271 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/newest-cni-195281/id_rsa...
	I0103 20:32:19.797041   67249 main.go:141] libmachine: (newest-cni-195281) DBG | I0103 20:32:19.796916   67271 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/newest-cni-195281/newest-cni-195281.rawdisk...
	I0103 20:32:19.797066   67249 main.go:141] libmachine: (newest-cni-195281) DBG | Writing magic tar header
	I0103 20:32:19.797080   67249 main.go:141] libmachine: (newest-cni-195281) DBG | Writing SSH key tar header
	I0103 20:32:19.797089   67249 main.go:141] libmachine: (newest-cni-195281) DBG | I0103 20:32:19.797050   67271 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17885-9609/.minikube/machines/newest-cni-195281 ...
	I0103 20:32:19.797185   67249 main.go:141] libmachine: (newest-cni-195281) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/newest-cni-195281
	I0103 20:32:19.797212   67249 main.go:141] libmachine: (newest-cni-195281) Setting executable bit set on /home/jenkins/minikube-integration/17885-9609/.minikube/machines/newest-cni-195281 (perms=drwx------)
	I0103 20:32:19.797223   67249 main.go:141] libmachine: (newest-cni-195281) Setting executable bit set on /home/jenkins/minikube-integration/17885-9609/.minikube/machines (perms=drwxr-xr-x)
	I0103 20:32:19.797237   67249 main.go:141] libmachine: (newest-cni-195281) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17885-9609/.minikube/machines
	I0103 20:32:19.797270   67249 main.go:141] libmachine: (newest-cni-195281) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17885-9609/.minikube
	I0103 20:32:19.797283   67249 main.go:141] libmachine: (newest-cni-195281) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17885-9609
	I0103 20:32:19.797291   67249 main.go:141] libmachine: (newest-cni-195281) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0103 20:32:19.797298   67249 main.go:141] libmachine: (newest-cni-195281) DBG | Checking permissions on dir: /home/jenkins
	I0103 20:32:19.797330   67249 main.go:141] libmachine: (newest-cni-195281) Setting executable bit set on /home/jenkins/minikube-integration/17885-9609/.minikube (perms=drwxr-xr-x)
	I0103 20:32:19.797359   67249 main.go:141] libmachine: (newest-cni-195281) DBG | Checking permissions on dir: /home
	I0103 20:32:19.797376   67249 main.go:141] libmachine: (newest-cni-195281) Setting executable bit set on /home/jenkins/minikube-integration/17885-9609 (perms=drwxrwxr-x)
	I0103 20:32:19.797390   67249 main.go:141] libmachine: (newest-cni-195281) DBG | Skipping /home - not owner
	I0103 20:32:19.797420   67249 main.go:141] libmachine: (newest-cni-195281) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0103 20:32:19.797443   67249 main.go:141] libmachine: (newest-cni-195281) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0103 20:32:19.797465   67249 main.go:141] libmachine: (newest-cni-195281) Creating domain...
	I0103 20:32:19.798661   67249 main.go:141] libmachine: (newest-cni-195281) define libvirt domain using xml: 
	I0103 20:32:19.798699   67249 main.go:141] libmachine: (newest-cni-195281) <domain type='kvm'>
	I0103 20:32:19.798733   67249 main.go:141] libmachine: (newest-cni-195281)   <name>newest-cni-195281</name>
	I0103 20:32:19.798765   67249 main.go:141] libmachine: (newest-cni-195281)   <memory unit='MiB'>2200</memory>
	I0103 20:32:19.798780   67249 main.go:141] libmachine: (newest-cni-195281)   <vcpu>2</vcpu>
	I0103 20:32:19.798790   67249 main.go:141] libmachine: (newest-cni-195281)   <features>
	I0103 20:32:19.798802   67249 main.go:141] libmachine: (newest-cni-195281)     <acpi/>
	I0103 20:32:19.798814   67249 main.go:141] libmachine: (newest-cni-195281)     <apic/>
	I0103 20:32:19.798826   67249 main.go:141] libmachine: (newest-cni-195281)     <pae/>
	I0103 20:32:19.798836   67249 main.go:141] libmachine: (newest-cni-195281)     
	I0103 20:32:19.798862   67249 main.go:141] libmachine: (newest-cni-195281)   </features>
	I0103 20:32:19.798981   67249 main.go:141] libmachine: (newest-cni-195281)   <cpu mode='host-passthrough'>
	I0103 20:32:19.799017   67249 main.go:141] libmachine: (newest-cni-195281)   
	I0103 20:32:19.799041   67249 main.go:141] libmachine: (newest-cni-195281)   </cpu>
	I0103 20:32:19.799055   67249 main.go:141] libmachine: (newest-cni-195281)   <os>
	I0103 20:32:19.799068   67249 main.go:141] libmachine: (newest-cni-195281)     <type>hvm</type>
	I0103 20:32:19.799083   67249 main.go:141] libmachine: (newest-cni-195281)     <boot dev='cdrom'/>
	I0103 20:32:19.799096   67249 main.go:141] libmachine: (newest-cni-195281)     <boot dev='hd'/>
	I0103 20:32:19.799111   67249 main.go:141] libmachine: (newest-cni-195281)     <bootmenu enable='no'/>
	I0103 20:32:19.799123   67249 main.go:141] libmachine: (newest-cni-195281)   </os>
	I0103 20:32:19.799136   67249 main.go:141] libmachine: (newest-cni-195281)   <devices>
	I0103 20:32:19.799152   67249 main.go:141] libmachine: (newest-cni-195281)     <disk type='file' device='cdrom'>
	I0103 20:32:19.799170   67249 main.go:141] libmachine: (newest-cni-195281)       <source file='/home/jenkins/minikube-integration/17885-9609/.minikube/machines/newest-cni-195281/boot2docker.iso'/>
	I0103 20:32:19.799186   67249 main.go:141] libmachine: (newest-cni-195281)       <target dev='hdc' bus='scsi'/>
	I0103 20:32:19.799199   67249 main.go:141] libmachine: (newest-cni-195281)       <readonly/>
	I0103 20:32:19.799223   67249 main.go:141] libmachine: (newest-cni-195281)     </disk>
	I0103 20:32:19.799240   67249 main.go:141] libmachine: (newest-cni-195281)     <disk type='file' device='disk'>
	I0103 20:32:19.799264   67249 main.go:141] libmachine: (newest-cni-195281)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0103 20:32:19.799305   67249 main.go:141] libmachine: (newest-cni-195281)       <source file='/home/jenkins/minikube-integration/17885-9609/.minikube/machines/newest-cni-195281/newest-cni-195281.rawdisk'/>
	I0103 20:32:19.799322   67249 main.go:141] libmachine: (newest-cni-195281)       <target dev='hda' bus='virtio'/>
	I0103 20:32:19.799333   67249 main.go:141] libmachine: (newest-cni-195281)     </disk>
	I0103 20:32:19.799344   67249 main.go:141] libmachine: (newest-cni-195281)     <interface type='network'>
	I0103 20:32:19.799357   67249 main.go:141] libmachine: (newest-cni-195281)       <source network='mk-newest-cni-195281'/>
	I0103 20:32:19.799371   67249 main.go:141] libmachine: (newest-cni-195281)       <model type='virtio'/>
	I0103 20:32:19.799383   67249 main.go:141] libmachine: (newest-cni-195281)     </interface>
	I0103 20:32:19.799397   67249 main.go:141] libmachine: (newest-cni-195281)     <interface type='network'>
	I0103 20:32:19.799409   67249 main.go:141] libmachine: (newest-cni-195281)       <source network='default'/>
	I0103 20:32:19.799423   67249 main.go:141] libmachine: (newest-cni-195281)       <model type='virtio'/>
	I0103 20:32:19.799436   67249 main.go:141] libmachine: (newest-cni-195281)     </interface>
	I0103 20:32:19.799451   67249 main.go:141] libmachine: (newest-cni-195281)     <serial type='pty'>
	I0103 20:32:19.799463   67249 main.go:141] libmachine: (newest-cni-195281)       <target port='0'/>
	I0103 20:32:19.799483   67249 main.go:141] libmachine: (newest-cni-195281)     </serial>
	I0103 20:32:19.799496   67249 main.go:141] libmachine: (newest-cni-195281)     <console type='pty'>
	I0103 20:32:19.799515   67249 main.go:141] libmachine: (newest-cni-195281)       <target type='serial' port='0'/>
	I0103 20:32:19.799534   67249 main.go:141] libmachine: (newest-cni-195281)     </console>
	I0103 20:32:19.799552   67249 main.go:141] libmachine: (newest-cni-195281)     <rng model='virtio'>
	I0103 20:32:19.799565   67249 main.go:141] libmachine: (newest-cni-195281)       <backend model='random'>/dev/random</backend>
	I0103 20:32:19.799580   67249 main.go:141] libmachine: (newest-cni-195281)     </rng>
	I0103 20:32:19.799592   67249 main.go:141] libmachine: (newest-cni-195281)     
	I0103 20:32:19.799605   67249 main.go:141] libmachine: (newest-cni-195281)     
	I0103 20:32:19.799614   67249 main.go:141] libmachine: (newest-cni-195281)   </devices>
	I0103 20:32:19.799626   67249 main.go:141] libmachine: (newest-cni-195281) </domain>
	I0103 20:32:19.799640   67249 main.go:141] libmachine: (newest-cni-195281) 
	I0103 20:32:19.803863   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:21:41:b4 in network default
	I0103 20:32:19.804577   67249 main.go:141] libmachine: (newest-cni-195281) Ensuring networks are active...
	I0103 20:32:19.804622   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:19.805388   67249 main.go:141] libmachine: (newest-cni-195281) Ensuring network default is active
	I0103 20:32:19.805848   67249 main.go:141] libmachine: (newest-cni-195281) Ensuring network mk-newest-cni-195281 is active
	I0103 20:32:19.806341   67249 main.go:141] libmachine: (newest-cni-195281) Getting domain xml...
	I0103 20:32:19.807082   67249 main.go:141] libmachine: (newest-cni-195281) Creating domain...
	I0103 20:32:21.132770   67249 main.go:141] libmachine: (newest-cni-195281) Waiting to get IP...
	I0103 20:32:21.134841   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:21.135341   67249 main.go:141] libmachine: (newest-cni-195281) DBG | unable to find current IP address of domain newest-cni-195281 in network mk-newest-cni-195281
	I0103 20:32:21.135366   67249 main.go:141] libmachine: (newest-cni-195281) DBG | I0103 20:32:21.135310   67271 retry.go:31] will retry after 211.135104ms: waiting for machine to come up
	I0103 20:32:21.347666   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:21.348235   67249 main.go:141] libmachine: (newest-cni-195281) DBG | unable to find current IP address of domain newest-cni-195281 in network mk-newest-cni-195281
	I0103 20:32:21.348261   67249 main.go:141] libmachine: (newest-cni-195281) DBG | I0103 20:32:21.348145   67271 retry.go:31] will retry after 323.28225ms: waiting for machine to come up
	I0103 20:32:21.672767   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:21.673311   67249 main.go:141] libmachine: (newest-cni-195281) DBG | unable to find current IP address of domain newest-cni-195281 in network mk-newest-cni-195281
	I0103 20:32:21.673343   67249 main.go:141] libmachine: (newest-cni-195281) DBG | I0103 20:32:21.673263   67271 retry.go:31] will retry after 371.328166ms: waiting for machine to come up
	I0103 20:32:22.045877   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:22.046594   67249 main.go:141] libmachine: (newest-cni-195281) DBG | unable to find current IP address of domain newest-cni-195281 in network mk-newest-cni-195281
	I0103 20:32:22.046630   67249 main.go:141] libmachine: (newest-cni-195281) DBG | I0103 20:32:22.046495   67271 retry.go:31] will retry after 424.478536ms: waiting for machine to come up
	I0103 20:32:22.472185   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:22.472629   67249 main.go:141] libmachine: (newest-cni-195281) DBG | unable to find current IP address of domain newest-cni-195281 in network mk-newest-cni-195281
	I0103 20:32:22.472661   67249 main.go:141] libmachine: (newest-cni-195281) DBG | I0103 20:32:22.472550   67271 retry.go:31] will retry after 661.63112ms: waiting for machine to come up
	I0103 20:32:23.135501   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:23.135980   67249 main.go:141] libmachine: (newest-cni-195281) DBG | unable to find current IP address of domain newest-cni-195281 in network mk-newest-cni-195281
	I0103 20:32:23.136011   67249 main.go:141] libmachine: (newest-cni-195281) DBG | I0103 20:32:23.135936   67271 retry.go:31] will retry after 627.099478ms: waiting for machine to come up
	I0103 20:32:23.764511   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:23.764964   67249 main.go:141] libmachine: (newest-cni-195281) DBG | unable to find current IP address of domain newest-cni-195281 in network mk-newest-cni-195281
	I0103 20:32:23.764993   67249 main.go:141] libmachine: (newest-cni-195281) DBG | I0103 20:32:23.764917   67271 retry.go:31] will retry after 1.023643059s: waiting for machine to come up
	I0103 20:32:24.790457   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:24.791000   67249 main.go:141] libmachine: (newest-cni-195281) DBG | unable to find current IP address of domain newest-cni-195281 in network mk-newest-cni-195281
	I0103 20:32:24.791033   67249 main.go:141] libmachine: (newest-cni-195281) DBG | I0103 20:32:24.790947   67271 retry.go:31] will retry after 1.372445622s: waiting for machine to come up
	I0103 20:32:26.165309   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:26.165782   67249 main.go:141] libmachine: (newest-cni-195281) DBG | unable to find current IP address of domain newest-cni-195281 in network mk-newest-cni-195281
	I0103 20:32:26.165801   67249 main.go:141] libmachine: (newest-cni-195281) DBG | I0103 20:32:26.165734   67271 retry.go:31] will retry after 1.684754533s: waiting for machine to come up
	I0103 20:32:27.851684   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:27.852122   67249 main.go:141] libmachine: (newest-cni-195281) DBG | unable to find current IP address of domain newest-cni-195281 in network mk-newest-cni-195281
	I0103 20:32:27.852160   67249 main.go:141] libmachine: (newest-cni-195281) DBG | I0103 20:32:27.852062   67271 retry.go:31] will retry after 1.693836467s: waiting for machine to come up
	I0103 20:32:29.547539   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:29.548051   67249 main.go:141] libmachine: (newest-cni-195281) DBG | unable to find current IP address of domain newest-cni-195281 in network mk-newest-cni-195281
	I0103 20:32:29.548080   67249 main.go:141] libmachine: (newest-cni-195281) DBG | I0103 20:32:29.548006   67271 retry.go:31] will retry after 2.126952355s: waiting for machine to come up
	I0103 20:32:31.676576   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:31.677064   67249 main.go:141] libmachine: (newest-cni-195281) DBG | unable to find current IP address of domain newest-cni-195281 in network mk-newest-cni-195281
	I0103 20:32:31.677093   67249 main.go:141] libmachine: (newest-cni-195281) DBG | I0103 20:32:31.677027   67271 retry.go:31] will retry after 3.435892014s: waiting for machine to come up
	I0103 20:32:35.114880   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:35.115371   67249 main.go:141] libmachine: (newest-cni-195281) DBG | unable to find current IP address of domain newest-cni-195281 in network mk-newest-cni-195281
	I0103 20:32:35.115397   67249 main.go:141] libmachine: (newest-cni-195281) DBG | I0103 20:32:35.115298   67271 retry.go:31] will retry after 3.914788696s: waiting for machine to come up
	I0103 20:32:39.034444   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:39.034917   67249 main.go:141] libmachine: (newest-cni-195281) DBG | unable to find current IP address of domain newest-cni-195281 in network mk-newest-cni-195281
	I0103 20:32:39.034950   67249 main.go:141] libmachine: (newest-cni-195281) DBG | I0103 20:32:39.034872   67271 retry.go:31] will retry after 5.092646295s: waiting for machine to come up
	I0103 20:32:44.131872   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:44.132395   67249 main.go:141] libmachine: (newest-cni-195281) Found IP for machine: 192.168.72.219
	I0103 20:32:44.132428   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has current primary IP address 192.168.72.219 and MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:44.132441   67249 main.go:141] libmachine: (newest-cni-195281) Reserving static IP address...
	I0103 20:32:44.132922   67249 main.go:141] libmachine: (newest-cni-195281) DBG | unable to find host DHCP lease matching {name: "newest-cni-195281", mac: "52:54:00:5a:49:af", ip: "192.168.72.219"} in network mk-newest-cni-195281
	I0103 20:32:44.216469   67249 main.go:141] libmachine: (newest-cni-195281) Reserved static IP address: 192.168.72.219
	I0103 20:32:44.216511   67249 main.go:141] libmachine: (newest-cni-195281) Waiting for SSH to be available...
	I0103 20:32:44.216522   67249 main.go:141] libmachine: (newest-cni-195281) DBG | Getting to WaitForSSH function...
	I0103 20:32:44.219743   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:44.220136   67249 main.go:141] libmachine: (newest-cni-195281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:49:af", ip: ""} in network mk-newest-cni-195281: {Iface:virbr3 ExpiryTime:2024-01-03 21:32:34 +0000 UTC Type:0 Mac:52:54:00:5a:49:af Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:minikube Clientid:01:52:54:00:5a:49:af}
	I0103 20:32:44.220181   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined IP address 192.168.72.219 and MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:44.220352   67249 main.go:141] libmachine: (newest-cni-195281) DBG | Using SSH client type: external
	I0103 20:32:44.220382   67249 main.go:141] libmachine: (newest-cni-195281) DBG | Using SSH private key: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/newest-cni-195281/id_rsa (-rw-------)
	I0103 20:32:44.220427   67249 main.go:141] libmachine: (newest-cni-195281) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.219 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17885-9609/.minikube/machines/newest-cni-195281/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0103 20:32:44.220443   67249 main.go:141] libmachine: (newest-cni-195281) DBG | About to run SSH command:
	I0103 20:32:44.220472   67249 main.go:141] libmachine: (newest-cni-195281) DBG | exit 0
	I0103 20:32:44.358552   67249 main.go:141] libmachine: (newest-cni-195281) DBG | SSH cmd err, output: <nil>: 
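
"Using SSH client type: external" means the driver shells out to the system ssh binary with the per-machine key and host-key checking disabled, and probes reachability by running "exit 0" until it succeeds, as the argument list above shows. A rough equivalent using os/exec, with a condensed subset of the options from the log (the surrounding program is illustrative, not minikube code):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        args := []string{
            "-F", "/dev/null",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "IdentitiesOnly=yes",
            "-i", "/home/jenkins/minikube-integration/17885-9609/.minikube/machines/newest-cni-195281/id_rsa",
            "-p", "22",
            "docker@192.168.72.219",
            "exit 0", // cheapest possible probe: succeeds as soon as sshd accepts logins
        }
        if err := exec.Command("/usr/bin/ssh", args...).Run(); err != nil {
            fmt.Println("SSH not ready yet:", err)
            return
        }
        fmt.Println("SSH is available")
    }
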
	I0103 20:32:44.358866   67249 main.go:141] libmachine: (newest-cni-195281) KVM machine creation complete!
	I0103 20:32:44.359216   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetConfigRaw
	I0103 20:32:44.359752   67249 main.go:141] libmachine: (newest-cni-195281) Calling .DriverName
	I0103 20:32:44.359969   67249 main.go:141] libmachine: (newest-cni-195281) Calling .DriverName
	I0103 20:32:44.360227   67249 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0103 20:32:44.360257   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetState
	I0103 20:32:44.361613   67249 main.go:141] libmachine: Detecting operating system of created instance...
	I0103 20:32:44.361632   67249 main.go:141] libmachine: Waiting for SSH to be available...
	I0103 20:32:44.361641   67249 main.go:141] libmachine: Getting to WaitForSSH function...
	I0103 20:32:44.361656   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHHostname
	I0103 20:32:44.364691   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:44.365073   67249 main.go:141] libmachine: (newest-cni-195281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:49:af", ip: ""} in network mk-newest-cni-195281: {Iface:virbr3 ExpiryTime:2024-01-03 21:32:34 +0000 UTC Type:0 Mac:52:54:00:5a:49:af Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:newest-cni-195281 Clientid:01:52:54:00:5a:49:af}
	I0103 20:32:44.365109   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined IP address 192.168.72.219 and MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:44.365248   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHPort
	I0103 20:32:44.365445   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHKeyPath
	I0103 20:32:44.365680   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHKeyPath
	I0103 20:32:44.365808   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHUsername
	I0103 20:32:44.365973   67249 main.go:141] libmachine: Using SSH client type: native
	I0103 20:32:44.366604   67249 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.72.219 22 <nil> <nil>}
	I0103 20:32:44.366626   67249 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0103 20:32:44.493837   67249 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0103 20:32:44.493867   67249 main.go:141] libmachine: Detecting the provisioner...
	I0103 20:32:44.493880   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHHostname
	I0103 20:32:44.497161   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:44.497541   67249 main.go:141] libmachine: (newest-cni-195281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:49:af", ip: ""} in network mk-newest-cni-195281: {Iface:virbr3 ExpiryTime:2024-01-03 21:32:34 +0000 UTC Type:0 Mac:52:54:00:5a:49:af Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:newest-cni-195281 Clientid:01:52:54:00:5a:49:af}
	I0103 20:32:44.497601   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined IP address 192.168.72.219 and MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:44.497794   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHPort
	I0103 20:32:44.498003   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHKeyPath
	I0103 20:32:44.498199   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHKeyPath
	I0103 20:32:44.498363   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHUsername
	I0103 20:32:44.498575   67249 main.go:141] libmachine: Using SSH client type: native
	I0103 20:32:44.499018   67249 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.72.219 22 <nil> <nil>}
	I0103 20:32:44.499033   67249 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0103 20:32:44.623686   67249 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gae27a7b-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0103 20:32:44.623771   67249 main.go:141] libmachine: found compatible host: buildroot
	I0103 20:32:44.623788   67249 main.go:141] libmachine: Provisioning with buildroot...
	I0103 20:32:44.623798   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetMachineName
	I0103 20:32:44.624047   67249 buildroot.go:166] provisioning hostname "newest-cni-195281"
	I0103 20:32:44.624075   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetMachineName
	I0103 20:32:44.624251   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHHostname
	I0103 20:32:44.627016   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:44.627435   67249 main.go:141] libmachine: (newest-cni-195281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:49:af", ip: ""} in network mk-newest-cni-195281: {Iface:virbr3 ExpiryTime:2024-01-03 21:32:34 +0000 UTC Type:0 Mac:52:54:00:5a:49:af Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:newest-cni-195281 Clientid:01:52:54:00:5a:49:af}
	I0103 20:32:44.627469   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined IP address 192.168.72.219 and MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:44.627629   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHPort
	I0103 20:32:44.627818   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHKeyPath
	I0103 20:32:44.627970   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHKeyPath
	I0103 20:32:44.628153   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHUsername
	I0103 20:32:44.628308   67249 main.go:141] libmachine: Using SSH client type: native
	I0103 20:32:44.628628   67249 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.72.219 22 <nil> <nil>}
	I0103 20:32:44.628643   67249 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-195281 && echo "newest-cni-195281" | sudo tee /etc/hostname
	I0103 20:32:44.766387   67249 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-195281
	
	I0103 20:32:44.766419   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHHostname
	I0103 20:32:44.769605   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:44.770020   67249 main.go:141] libmachine: (newest-cni-195281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:49:af", ip: ""} in network mk-newest-cni-195281: {Iface:virbr3 ExpiryTime:2024-01-03 21:32:34 +0000 UTC Type:0 Mac:52:54:00:5a:49:af Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:newest-cni-195281 Clientid:01:52:54:00:5a:49:af}
	I0103 20:32:44.770063   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined IP address 192.168.72.219 and MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:44.770286   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHPort
	I0103 20:32:44.770478   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHKeyPath
	I0103 20:32:44.770696   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHKeyPath
	I0103 20:32:44.770855   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHUsername
	I0103 20:32:44.771047   67249 main.go:141] libmachine: Using SSH client type: native
	I0103 20:32:44.771391   67249 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.72.219 22 <nil> <nil>}
	I0103 20:32:44.771416   67249 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-195281' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-195281/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-195281' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0103 20:32:44.906281   67249 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0103 20:32:44.906308   67249 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17885-9609/.minikube CaCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17885-9609/.minikube}
	I0103 20:32:44.906343   67249 buildroot.go:174] setting up certificates
	I0103 20:32:44.906354   67249 provision.go:83] configureAuth start
	I0103 20:32:44.906370   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetMachineName
	I0103 20:32:44.906662   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetIP
	I0103 20:32:44.909425   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:44.909736   67249 main.go:141] libmachine: (newest-cni-195281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:49:af", ip: ""} in network mk-newest-cni-195281: {Iface:virbr3 ExpiryTime:2024-01-03 21:32:34 +0000 UTC Type:0 Mac:52:54:00:5a:49:af Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:newest-cni-195281 Clientid:01:52:54:00:5a:49:af}
	I0103 20:32:44.909763   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined IP address 192.168.72.219 and MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:44.909936   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHHostname
	I0103 20:32:44.912539   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:44.913023   67249 main.go:141] libmachine: (newest-cni-195281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:49:af", ip: ""} in network mk-newest-cni-195281: {Iface:virbr3 ExpiryTime:2024-01-03 21:32:34 +0000 UTC Type:0 Mac:52:54:00:5a:49:af Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:newest-cni-195281 Clientid:01:52:54:00:5a:49:af}
	I0103 20:32:44.913051   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined IP address 192.168.72.219 and MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:44.913266   67249 provision.go:138] copyHostCerts
	I0103 20:32:44.913339   67249 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem, removing ...
	I0103 20:32:44.913361   67249 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem
	I0103 20:32:44.913448   67249 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem (1078 bytes)
	I0103 20:32:44.913580   67249 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem, removing ...
	I0103 20:32:44.913592   67249 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem
	I0103 20:32:44.913631   67249 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem (1123 bytes)
	I0103 20:32:44.913722   67249 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem, removing ...
	I0103 20:32:44.913732   67249 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem
	I0103 20:32:44.913769   67249 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem (1679 bytes)
	I0103 20:32:44.913851   67249 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem org=jenkins.newest-cni-195281 san=[192.168.72.219 192.168.72.219 localhost 127.0.0.1 minikube newest-cni-195281]
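
The "generating server cert" step mints a server certificate signed with the local minikube CA key, with every SAN listed above baked in so one certificate is valid for the VM IP, localhost and the node name. A condensed variant using crypto/x509 is sketched below; it self-signs instead of signing against a CA purely to keep the example short, and all names are taken from the log or are illustrative:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-195281"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs mirroring the san=[...] list in the log line above.
            IPAddresses: []net.IP{net.ParseIP("192.168.72.219"), net.ParseIP("127.0.0.1")},
            DNSNames:    []string{"localhost", "minikube", "newest-cni-195281"},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
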
	I0103 20:32:45.098688   67249 provision.go:172] copyRemoteCerts
	I0103 20:32:45.098762   67249 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0103 20:32:45.098793   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHHostname
	I0103 20:32:45.101827   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:45.102181   67249 main.go:141] libmachine: (newest-cni-195281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:49:af", ip: ""} in network mk-newest-cni-195281: {Iface:virbr3 ExpiryTime:2024-01-03 21:32:34 +0000 UTC Type:0 Mac:52:54:00:5a:49:af Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:newest-cni-195281 Clientid:01:52:54:00:5a:49:af}
	I0103 20:32:45.102213   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined IP address 192.168.72.219 and MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:45.102468   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHPort
	I0103 20:32:45.102706   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHKeyPath
	I0103 20:32:45.102868   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHUsername
	I0103 20:32:45.103005   67249 sshutil.go:53] new ssh client: &{IP:192.168.72.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/newest-cni-195281/id_rsa Username:docker}
	I0103 20:32:45.197407   67249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0103 20:32:45.221474   67249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0103 20:32:45.244138   67249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0103 20:32:45.268222   67249 provision.go:86] duration metric: configureAuth took 361.849849ms
	I0103 20:32:45.268253   67249 buildroot.go:189] setting minikube options for container-runtime
	I0103 20:32:45.268431   67249 config.go:182] Loaded profile config "newest-cni-195281": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0103 20:32:45.268531   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHHostname
	I0103 20:32:45.271603   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:45.272110   67249 main.go:141] libmachine: (newest-cni-195281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:49:af", ip: ""} in network mk-newest-cni-195281: {Iface:virbr3 ExpiryTime:2024-01-03 21:32:34 +0000 UTC Type:0 Mac:52:54:00:5a:49:af Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:newest-cni-195281 Clientid:01:52:54:00:5a:49:af}
	I0103 20:32:45.272146   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined IP address 192.168.72.219 and MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:45.272402   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHPort
	I0103 20:32:45.272676   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHKeyPath
	I0103 20:32:45.272851   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHKeyPath
	I0103 20:32:45.273015   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHUsername
	I0103 20:32:45.273229   67249 main.go:141] libmachine: Using SSH client type: native
	I0103 20:32:45.273571   67249 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.72.219 22 <nil> <nil>}
	I0103 20:32:45.273593   67249 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0103 20:32:45.615676   67249 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
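
The stray %!s(MISSING) token in the command above is not part of what actually ran on the guest: it is Go's fmt package flagging a verb with no matching argument, because the command text was passed through a Printf-style call when the log line was emitted. The intended text here is %s, and the same artifact shows up later in the date, find -printf and stat commands and in the evictionHard values of the kubeadm config (where the intended value is "0%"). A minimal reproduction:

    package main

    import "fmt"

    func main() {
        // Passing text that contains verbs as the format string, with no
        // arguments, makes fmt emit %!verb(MISSING) placeholders.
        fmt.Printf("date +%s.%N\n")               // prints: date +%!s(MISSING).%!N(MISSING)
        fmt.Printf("nodefs.available: \"0%\"\n")  // prints: nodefs.available: "0%!"(MISSING)
    }
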
	I0103 20:32:45.615712   67249 main.go:141] libmachine: Checking connection to Docker...
	I0103 20:32:45.615725   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetURL
	I0103 20:32:45.617050   67249 main.go:141] libmachine: (newest-cni-195281) DBG | Using libvirt version 6000000
	I0103 20:32:45.619845   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:45.620254   67249 main.go:141] libmachine: (newest-cni-195281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:49:af", ip: ""} in network mk-newest-cni-195281: {Iface:virbr3 ExpiryTime:2024-01-03 21:32:34 +0000 UTC Type:0 Mac:52:54:00:5a:49:af Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:newest-cni-195281 Clientid:01:52:54:00:5a:49:af}
	I0103 20:32:45.620287   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined IP address 192.168.72.219 and MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:45.620398   67249 main.go:141] libmachine: Docker is up and running!
	I0103 20:32:45.620418   67249 main.go:141] libmachine: Reticulating splines...
	I0103 20:32:45.620426   67249 client.go:171] LocalClient.Create took 26.208121017s
	I0103 20:32:45.620449   67249 start.go:167] duration metric: libmachine.API.Create for "newest-cni-195281" took 26.208190465s
	I0103 20:32:45.620456   67249 start.go:300] post-start starting for "newest-cni-195281" (driver="kvm2")
	I0103 20:32:45.620467   67249 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0103 20:32:45.620488   67249 main.go:141] libmachine: (newest-cni-195281) Calling .DriverName
	I0103 20:32:45.620753   67249 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0103 20:32:45.620791   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHHostname
	I0103 20:32:45.623465   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:45.623873   67249 main.go:141] libmachine: (newest-cni-195281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:49:af", ip: ""} in network mk-newest-cni-195281: {Iface:virbr3 ExpiryTime:2024-01-03 21:32:34 +0000 UTC Type:0 Mac:52:54:00:5a:49:af Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:newest-cni-195281 Clientid:01:52:54:00:5a:49:af}
	I0103 20:32:45.623902   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined IP address 192.168.72.219 and MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:45.624029   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHPort
	I0103 20:32:45.624213   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHKeyPath
	I0103 20:32:45.624385   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHUsername
	I0103 20:32:45.624523   67249 sshutil.go:53] new ssh client: &{IP:192.168.72.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/newest-cni-195281/id_rsa Username:docker}
	I0103 20:32:45.718372   67249 ssh_runner.go:195] Run: cat /etc/os-release
	I0103 20:32:45.722729   67249 info.go:137] Remote host: Buildroot 2021.02.12
	I0103 20:32:45.722762   67249 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-9609/.minikube/addons for local assets ...
	I0103 20:32:45.722864   67249 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-9609/.minikube/files for local assets ...
	I0103 20:32:45.722984   67249 filesync.go:149] local asset: /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem -> 167952.pem in /etc/ssl/certs
	I0103 20:32:45.723125   67249 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0103 20:32:45.733617   67249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem --> /etc/ssl/certs/167952.pem (1708 bytes)
	I0103 20:32:45.757682   67249 start.go:303] post-start completed in 137.211001ms
	I0103 20:32:45.757749   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetConfigRaw
	I0103 20:32:45.758396   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetIP
	I0103 20:32:45.761402   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:45.761798   67249 main.go:141] libmachine: (newest-cni-195281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:49:af", ip: ""} in network mk-newest-cni-195281: {Iface:virbr3 ExpiryTime:2024-01-03 21:32:34 +0000 UTC Type:0 Mac:52:54:00:5a:49:af Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:newest-cni-195281 Clientid:01:52:54:00:5a:49:af}
	I0103 20:32:45.761832   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined IP address 192.168.72.219 and MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:45.762088   67249 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/newest-cni-195281/config.json ...
	I0103 20:32:45.762302   67249 start.go:128] duration metric: createHost completed in 26.369804551s
	I0103 20:32:45.762332   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHHostname
	I0103 20:32:45.764911   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:45.765288   67249 main.go:141] libmachine: (newest-cni-195281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:49:af", ip: ""} in network mk-newest-cni-195281: {Iface:virbr3 ExpiryTime:2024-01-03 21:32:34 +0000 UTC Type:0 Mac:52:54:00:5a:49:af Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:newest-cni-195281 Clientid:01:52:54:00:5a:49:af}
	I0103 20:32:45.765321   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined IP address 192.168.72.219 and MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:45.765500   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHPort
	I0103 20:32:45.765694   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHKeyPath
	I0103 20:32:45.765902   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHKeyPath
	I0103 20:32:45.766060   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHUsername
	I0103 20:32:45.766292   67249 main.go:141] libmachine: Using SSH client type: native
	I0103 20:32:45.766620   67249 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.72.219 22 <nil> <nil>}
	I0103 20:32:45.766632   67249 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0103 20:32:45.895678   67249 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704313965.882309318
	
	I0103 20:32:45.895711   67249 fix.go:206] guest clock: 1704313965.882309318
	I0103 20:32:45.895722   67249 fix.go:219] Guest: 2024-01-03 20:32:45.882309318 +0000 UTC Remote: 2024-01-03 20:32:45.762315613 +0000 UTC m=+26.509941419 (delta=119.993705ms)
	I0103 20:32:45.895748   67249 fix.go:190] guest clock delta is within tolerance: 119.993705ms
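
The fix.go lines compare the guest's clock (read over SSH with date) against the host's: 20:32:45.882309318 minus 20:32:45.762315613 gives the 119.993705ms delta, which is small enough that no resync is attempted. A compact version of that check; the one-second tolerance is an assumption for illustration, not minikube's actual threshold:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        guest := time.Date(2024, 1, 3, 20, 32, 45, 882309318, time.UTC)
        host := time.Date(2024, 1, 3, 20, 32, 45, 762315613, time.UTC)

        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        const tolerance = 1 * time.Second // illustrative threshold only
        if delta > tolerance {
            fmt.Printf("guest clock off by %s, would resync\n", delta)
            return
        }
        fmt.Printf("guest clock delta %s is within tolerance\n", delta) // ~119.993705ms
    }
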
	I0103 20:32:45.895770   67249 start.go:83] releasing machines lock for "newest-cni-195281", held for 26.50335784s
	I0103 20:32:45.895801   67249 main.go:141] libmachine: (newest-cni-195281) Calling .DriverName
	I0103 20:32:45.896111   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetIP
	I0103 20:32:45.898979   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:45.899363   67249 main.go:141] libmachine: (newest-cni-195281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:49:af", ip: ""} in network mk-newest-cni-195281: {Iface:virbr3 ExpiryTime:2024-01-03 21:32:34 +0000 UTC Type:0 Mac:52:54:00:5a:49:af Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:newest-cni-195281 Clientid:01:52:54:00:5a:49:af}
	I0103 20:32:45.899413   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined IP address 192.168.72.219 and MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:45.899560   67249 main.go:141] libmachine: (newest-cni-195281) Calling .DriverName
	I0103 20:32:45.900114   67249 main.go:141] libmachine: (newest-cni-195281) Calling .DriverName
	I0103 20:32:45.900299   67249 main.go:141] libmachine: (newest-cni-195281) Calling .DriverName
	I0103 20:32:45.900417   67249 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0103 20:32:45.900468   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHHostname
	I0103 20:32:45.900602   67249 ssh_runner.go:195] Run: cat /version.json
	I0103 20:32:45.900633   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHHostname
	I0103 20:32:45.903625   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:45.903655   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:45.904059   67249 main.go:141] libmachine: (newest-cni-195281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:49:af", ip: ""} in network mk-newest-cni-195281: {Iface:virbr3 ExpiryTime:2024-01-03 21:32:34 +0000 UTC Type:0 Mac:52:54:00:5a:49:af Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:newest-cni-195281 Clientid:01:52:54:00:5a:49:af}
	I0103 20:32:45.904096   67249 main.go:141] libmachine: (newest-cni-195281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:49:af", ip: ""} in network mk-newest-cni-195281: {Iface:virbr3 ExpiryTime:2024-01-03 21:32:34 +0000 UTC Type:0 Mac:52:54:00:5a:49:af Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:newest-cni-195281 Clientid:01:52:54:00:5a:49:af}
	I0103 20:32:45.904122   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined IP address 192.168.72.219 and MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:45.904142   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined IP address 192.168.72.219 and MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:45.904262   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHPort
	I0103 20:32:45.904374   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHPort
	I0103 20:32:45.904453   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHKeyPath
	I0103 20:32:45.904522   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHKeyPath
	I0103 20:32:45.904666   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHUsername
	I0103 20:32:45.904708   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHUsername
	I0103 20:32:45.904838   67249 sshutil.go:53] new ssh client: &{IP:192.168.72.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/newest-cni-195281/id_rsa Username:docker}
	I0103 20:32:45.904893   67249 sshutil.go:53] new ssh client: &{IP:192.168.72.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/newest-cni-195281/id_rsa Username:docker}
	I0103 20:32:46.030977   67249 ssh_runner.go:195] Run: systemctl --version
	I0103 20:32:46.037034   67249 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0103 20:32:46.200079   67249 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0103 20:32:46.206922   67249 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0103 20:32:46.207016   67249 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0103 20:32:46.223019   67249 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
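
Here any pre-existing bridge or podman CNI configs are neutralised by renaming them with a .mk_disabled suffix rather than deleting them, so CRI-O cannot pick up a conflicting network; the find/-exec mv invocation above does exactly that (its %!p(MISSING) is the fmt artifact noted earlier, the intended verb being %p). A sketch of the same idea in Go, with the patterns and suffix taken from the log and everything else illustrative:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        for _, pattern := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
            matches, _ := filepath.Glob(pattern)
            for _, f := range matches {
                if strings.HasSuffix(f, ".mk_disabled") {
                    continue // already disabled on a previous run
                }
                if err := os.Rename(f, f+".mk_disabled"); err != nil {
                    fmt.Println("skip:", err)
                    continue
                }
                fmt.Println("disabled", f)
            }
        }
    }
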
	I0103 20:32:46.223047   67249 start.go:475] detecting cgroup driver to use...
	I0103 20:32:46.223127   67249 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0103 20:32:46.239996   67249 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0103 20:32:46.253612   67249 docker.go:203] disabling cri-docker service (if available) ...
	I0103 20:32:46.253699   67249 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0103 20:32:46.267450   67249 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0103 20:32:46.282771   67249 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0103 20:32:46.393693   67249 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0103 20:32:46.526478   67249 docker.go:219] disabling docker service ...
	I0103 20:32:46.526587   67249 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0103 20:32:46.540410   67249 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0103 20:32:46.552921   67249 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0103 20:32:46.683462   67249 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0103 20:32:46.805351   67249 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0103 20:32:46.819457   67249 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0103 20:32:46.836394   67249 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0103 20:32:46.836464   67249 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:32:46.845831   67249 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0103 20:32:46.845925   67249 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:32:46.855232   67249 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:32:46.864892   67249 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:32:46.873915   67249 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0103 20:32:46.883629   67249 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0103 20:32:46.892075   67249 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0103 20:32:46.892200   67249 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0103 20:32:46.904374   67249 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0103 20:32:46.913766   67249 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0103 20:32:47.034679   67249 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0103 20:32:47.216427   67249 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0103 20:32:47.216509   67249 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0103 20:32:47.222160   67249 start.go:543] Will wait 60s for crictl version
	I0103 20:32:47.222235   67249 ssh_runner.go:195] Run: which crictl
	I0103 20:32:47.226110   67249 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0103 20:32:47.268069   67249 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0103 20:32:47.268163   67249 ssh_runner.go:195] Run: crio --version
	I0103 20:32:47.317148   67249 ssh_runner.go:195] Run: crio --version
	I0103 20:32:47.365121   67249 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I0103 20:32:47.366551   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetIP
	I0103 20:32:47.369708   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:47.369977   67249 main.go:141] libmachine: (newest-cni-195281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:49:af", ip: ""} in network mk-newest-cni-195281: {Iface:virbr3 ExpiryTime:2024-01-03 21:32:34 +0000 UTC Type:0 Mac:52:54:00:5a:49:af Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:newest-cni-195281 Clientid:01:52:54:00:5a:49:af}
	I0103 20:32:47.369997   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined IP address 192.168.72.219 and MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:47.370262   67249 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0103 20:32:47.374478   67249 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
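
The bash one-liner above is an idempotent "upsert" of the host.minikube.internal entry: drop any existing line for that name, append a fresh one, and copy the result back over /etc/hosts. The same idea expressed in Go, with the IP and hostname taken from the log and the helper itself purely illustrative (it prints the rewritten content instead of writing the file):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // upsertHost returns hosts content with exactly one line mapping name to ip.
    func upsertHost(content, ip, name string) string {
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(content, "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+name) {
                continue // drop any stale entry before re-adding it
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+name)
        return strings.Join(kept, "\n") + "\n"
    }

    func main() {
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Print(upsertHost(string(data), "192.168.72.1", "host.minikube.internal"))
    }
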
	I0103 20:32:47.388565   67249 localpath.go:92] copying /home/jenkins/minikube-integration/17885-9609/.minikube/client.crt -> /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/newest-cni-195281/client.crt
	I0103 20:32:47.388746   67249 localpath.go:117] copying /home/jenkins/minikube-integration/17885-9609/.minikube/client.key -> /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/newest-cni-195281/client.key
	I0103 20:32:47.390765   67249 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0103 20:32:47.392153   67249 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0103 20:32:47.392217   67249 ssh_runner.go:195] Run: sudo crictl images --output json
	I0103 20:32:47.427843   67249 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0103 20:32:47.427922   67249 ssh_runner.go:195] Run: which lz4
	I0103 20:32:47.431931   67249 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0103 20:32:47.436174   67249 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0103 20:32:47.436209   67249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (401795125 bytes)
	I0103 20:32:48.886506   67249 crio.go:444] Took 1.454620 seconds to copy over tarball
	I0103 20:32:48.886605   67249 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0103 20:32:51.425832   67249 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.539199724s)
	I0103 20:32:51.425868   67249 crio.go:451] Took 2.539326 seconds to extract the tarball
	I0103 20:32:51.425880   67249 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0103 20:32:51.463537   67249 ssh_runner.go:195] Run: sudo crictl images --output json
	I0103 20:32:51.542489   67249 crio.go:496] all images are preloaded for cri-o runtime.
	I0103 20:32:51.542535   67249 cache_images.go:84] Images are preloaded, skipping loading
	I0103 20:32:51.542644   67249 ssh_runner.go:195] Run: crio config
	I0103 20:32:51.604708   67249 cni.go:84] Creating CNI manager for ""
	I0103 20:32:51.604736   67249 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0103 20:32:51.604756   67249 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I0103 20:32:51.604774   67249 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.72.219 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-195281 NodeName:newest-cni-195281 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.219"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.219 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0103 20:32:51.604921   67249 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.219
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-195281"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.219
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.219"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0103 20:32:51.604998   67249 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-195281 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.219
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-195281 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0103 20:32:51.605063   67249 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0103 20:32:51.614067   67249 binaries.go:44] Found k8s binaries, skipping transfer
	I0103 20:32:51.614138   67249 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0103 20:32:51.622881   67249 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (419 bytes)
	I0103 20:32:51.639844   67249 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0103 20:32:51.657148   67249 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2233 bytes)
	I0103 20:32:51.673717   67249 ssh_runner.go:195] Run: grep 192.168.72.219	control-plane.minikube.internal$ /etc/hosts
	I0103 20:32:51.677731   67249 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.219	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0103 20:32:51.691172   67249 certs.go:56] Setting up /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/newest-cni-195281 for IP: 192.168.72.219
	I0103 20:32:51.691216   67249 certs.go:190] acquiring lock for shared ca certs: {Name:mkcbd6a6a2f3ee7625ecf4a1f72bb7f9689bd33d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:32:51.691406   67249 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17885-9609/.minikube/ca.key
	I0103 20:32:51.691466   67249 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.key
	I0103 20:32:51.691555   67249 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/newest-cni-195281/client.key
	I0103 20:32:51.691578   67249 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/newest-cni-195281/apiserver.key.67e26840
	I0103 20:32:51.691591   67249 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/newest-cni-195281/apiserver.crt.67e26840 with IP's: [192.168.72.219 10.96.0.1 127.0.0.1 10.0.0.1]
	I0103 20:32:51.819513   67249 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/newest-cni-195281/apiserver.crt.67e26840 ...
	I0103 20:32:51.819543   67249 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/newest-cni-195281/apiserver.crt.67e26840: {Name:mke6310b8f3a7f62097b99eb3014efd0dc20eee7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:32:51.819753   67249 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/newest-cni-195281/apiserver.key.67e26840 ...
	I0103 20:32:51.819775   67249 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/newest-cni-195281/apiserver.key.67e26840: {Name:mk86f84e3544818fe75547ad73b8572d5ea7d5d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:32:51.819889   67249 certs.go:337] copying /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/newest-cni-195281/apiserver.crt.67e26840 -> /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/newest-cni-195281/apiserver.crt
	I0103 20:32:51.819951   67249 certs.go:341] copying /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/newest-cni-195281/apiserver.key.67e26840 -> /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/newest-cni-195281/apiserver.key
	I0103 20:32:51.819998   67249 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/newest-cni-195281/proxy-client.key
	I0103 20:32:51.820011   67249 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/newest-cni-195281/proxy-client.crt with IP's: []
	I0103 20:32:52.091348   67249 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/newest-cni-195281/proxy-client.crt ...
	I0103 20:32:52.091389   67249 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/newest-cni-195281/proxy-client.crt: {Name:mk0bd3b5025560ca11106a8bacced64f41bc0bb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:32:52.091598   67249 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/newest-cni-195281/proxy-client.key ...
	I0103 20:32:52.091624   67249 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/newest-cni-195281/proxy-client.key: {Name:mkb6394b7df36e99fa2b47f41fee526be70aa354 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:32:52.091875   67249 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795.pem (1338 bytes)
	W0103 20:32:52.091916   67249 certs.go:433] ignoring /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795_empty.pem, impossibly tiny 0 bytes
	I0103 20:32:52.091924   67249 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem (1675 bytes)
	I0103 20:32:52.091945   67249 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem (1078 bytes)
	I0103 20:32:52.091968   67249 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem (1123 bytes)
	I0103 20:32:52.092005   67249 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem (1679 bytes)
	I0103 20:32:52.092084   67249 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem (1708 bytes)
	I0103 20:32:52.092677   67249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/newest-cni-195281/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0103 20:32:52.119326   67249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/newest-cni-195281/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0103 20:32:52.144246   67249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/newest-cni-195281/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0103 20:32:52.168845   67249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/newest-cni-195281/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0103 20:32:52.193428   67249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0103 20:32:52.217391   67249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0103 20:32:52.241585   67249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0103 20:32:52.267288   67249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0103 20:32:52.292564   67249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0103 20:32:52.316091   67249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795.pem --> /usr/share/ca-certificates/16795.pem (1338 bytes)
	I0103 20:32:52.339271   67249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem --> /usr/share/ca-certificates/167952.pem (1708 bytes)
	I0103 20:32:52.363053   67249 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0103 20:32:52.379247   67249 ssh_runner.go:195] Run: openssl version
	I0103 20:32:52.385228   67249 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0103 20:32:52.395301   67249 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:32:52.400316   67249 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  3 18:58 /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:32:52.400391   67249 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:32:52.406648   67249 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0103 20:32:52.417403   67249 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16795.pem && ln -fs /usr/share/ca-certificates/16795.pem /etc/ssl/certs/16795.pem"
	I0103 20:32:52.428037   67249 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16795.pem
	I0103 20:32:52.433100   67249 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  3 19:07 /usr/share/ca-certificates/16795.pem
	I0103 20:32:52.433177   67249 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16795.pem
	I0103 20:32:52.439099   67249 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16795.pem /etc/ssl/certs/51391683.0"
	I0103 20:32:52.449452   67249 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167952.pem && ln -fs /usr/share/ca-certificates/167952.pem /etc/ssl/certs/167952.pem"
	I0103 20:32:52.460722   67249 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167952.pem
	I0103 20:32:52.465623   67249 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  3 19:07 /usr/share/ca-certificates/167952.pem
	I0103 20:32:52.465683   67249 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167952.pem
	I0103 20:32:52.471232   67249 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167952.pem /etc/ssl/certs/3ec20f2e.0"
	I0103 20:32:52.481150   67249 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0103 20:32:52.485667   67249 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0103 20:32:52.485744   67249 kubeadm.go:404] StartCluster: {Name:newest-cni-195281 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.29.0-rc.2 ClusterName:newest-cni-195281 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.219 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jen
kins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 20:32:52.485826   67249 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0103 20:32:52.485909   67249 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0103 20:32:52.531498   67249 cri.go:89] found id: ""
	I0103 20:32:52.531561   67249 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0103 20:32:52.540939   67249 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0103 20:32:52.550366   67249 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0103 20:32:52.561098   67249 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0103 20:32:52.561141   67249 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0103 20:32:52.688110   67249 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.2
	I0103 20:32:52.688227   67249 kubeadm.go:322] [preflight] Running pre-flight checks
	I0103 20:32:52.982436   67249 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0103 20:32:52.982649   67249 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0103 20:32:52.982759   67249 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0103 20:32:53.224308   67249 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0103 20:32:53.374760   67249 out.go:204]   - Generating certificates and keys ...
	I0103 20:32:53.374889   67249 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0103 20:32:53.374992   67249 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0103 20:32:53.375097   67249 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0103 20:32:53.441111   67249 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0103 20:32:53.628208   67249 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0103 20:32:53.797130   67249 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0103 20:32:53.952777   67249 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0103 20:32:53.953156   67249 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-195281] and IPs [192.168.72.219 127.0.0.1 ::1]
	I0103 20:32:54.217335   67249 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0103 20:32:54.217519   67249 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-195281] and IPs [192.168.72.219 127.0.0.1 ::1]
	I0103 20:32:54.566407   67249 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0103 20:32:54.711625   67249 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0103 20:32:54.998510   67249 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0103 20:32:54.998854   67249 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0103 20:32:55.388836   67249 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0103 20:32:55.480482   67249 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0103 20:32:55.693814   67249 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0103 20:32:55.832458   67249 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0103 20:32:55.924416   67249 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0103 20:32:55.925246   67249 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0103 20:32:55.928467   67249 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0103 20:32:55.930672   67249 out.go:204]   - Booting up control plane ...
	I0103 20:32:55.930771   67249 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0103 20:32:55.930840   67249 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0103 20:32:55.930933   67249 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0103 20:32:55.948035   67249 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0103 20:32:55.949287   67249 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0103 20:32:55.949335   67249 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0103 20:32:56.085462   67249 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0103 20:33:04.088972   67249 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.003943 seconds
	I0103 20:33:04.109414   67249 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0103 20:33:04.127616   67249 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0103 20:33:04.668745   67249 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0103 20:33:04.668981   67249 kubeadm.go:322] [mark-control-plane] Marking the node newest-cni-195281 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0103 20:33:05.184119   67249 kubeadm.go:322] [bootstrap-token] Using token: 2cn0nj.lvw1854yz02ozc4e
	I0103 20:33:05.185662   67249 out.go:204]   - Configuring RBAC rules ...
	I0103 20:33:05.185785   67249 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0103 20:33:05.196688   67249 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0103 20:33:05.205501   67249 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0103 20:33:05.210178   67249 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0103 20:33:05.214606   67249 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0103 20:33:05.219096   67249 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0103 20:33:05.237231   67249 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0103 20:33:05.505466   67249 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0103 20:33:05.634282   67249 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0103 20:33:05.635368   67249 kubeadm.go:322] 
	I0103 20:33:05.635454   67249 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0103 20:33:05.635465   67249 kubeadm.go:322] 
	I0103 20:33:05.635574   67249 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0103 20:33:05.635615   67249 kubeadm.go:322] 
	I0103 20:33:05.635654   67249 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0103 20:33:05.635737   67249 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0103 20:33:05.635798   67249 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0103 20:33:05.635807   67249 kubeadm.go:322] 
	I0103 20:33:05.635897   67249 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0103 20:33:05.635911   67249 kubeadm.go:322] 
	I0103 20:33:05.635966   67249 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0103 20:33:05.635988   67249 kubeadm.go:322] 
	I0103 20:33:05.636075   67249 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0103 20:33:05.636163   67249 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0103 20:33:05.636267   67249 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0103 20:33:05.636281   67249 kubeadm.go:322] 
	I0103 20:33:05.636386   67249 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0103 20:33:05.636487   67249 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0103 20:33:05.636500   67249 kubeadm.go:322] 
	I0103 20:33:05.636618   67249 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 2cn0nj.lvw1854yz02ozc4e \
	I0103 20:33:05.636787   67249 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:abd7748e33dd825416f0452914584982da7041f4caa98027889459d3fee91b12 \
	I0103 20:33:05.636836   67249 kubeadm.go:322] 	--control-plane 
	I0103 20:33:05.636850   67249 kubeadm.go:322] 
	I0103 20:33:05.636969   67249 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0103 20:33:05.636981   67249 kubeadm.go:322] 
	I0103 20:33:05.637089   67249 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 2cn0nj.lvw1854yz02ozc4e \
	I0103 20:33:05.637207   67249 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:abd7748e33dd825416f0452914584982da7041f4caa98027889459d3fee91b12 
	I0103 20:33:05.637736   67249 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
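
The join commands printed above embed a --discovery-token-ca-cert-hash. If that output is lost, the same hash can be recomputed from the cluster CA; the sketch below uses kubeadm's default certificate path (/etc/kubernetes/pki/ca.crt), while this cluster keeps its CA under /var/lib/minikube/certs as shown earlier, so adjust the path accordingly. It also enables the kubelet unit flagged by the preflight warning.

    # Recompute the discovery-token CA certificate hash printed by kubeadm init
    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
    # Address the [WARNING Service-Kubelet] above
    sudo systemctl enable kubelet.service
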
	I0103 20:33:05.637759   67249 cni.go:84] Creating CNI manager for ""
	I0103 20:33:05.637766   67249 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0103 20:33:05.639750   67249 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0103 20:33:05.641373   67249 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0103 20:33:05.691055   67249 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
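
The 457-byte conflist written here is generated by minikube for its built-in bridge CNI. The log does not show its contents; the snippet below is only an illustration of what a bridge conflist of this shape typically contains (the subnet mirrors the pod-network-cidr 10.42.0.0/16 passed to kubeadm above, and every other field value is an assumption, not the literal file):

    # Illustrative bridge CNI config (NOT the literal file minikube writes)
    cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.42.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }
    EOF
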
	I0103 20:33:05.744358   67249 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0103 20:33:05.744420   67249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:33:05.744430   67249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=1b6a81cbc05f28310ff11df4170e79e2b8bf477a minikube.k8s.io/name=newest-cni-195281 minikube.k8s.io/updated_at=2024_01_03T20_33_05_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:33:05.803640   67249 ops.go:34] apiserver oom_adj: -16
	I0103 20:33:06.019502   67249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:33:06.520397   67249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:33:07.019980   67249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:33:07.520416   67249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:33:08.019777   67249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:33:08.520608   67249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:33:09.020553   67249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:33:09.520149   67249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:33:10.020370   67249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:33:10.520393   67249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:33:11.020311   67249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:33:11.520514   67249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:33:12.020199   67249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:33:12.519615   67249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:33:13.020003   67249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:33:13.519798   67249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:33:14.020401   67249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:33:14.520399   67249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:33:15.019786   67249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:33:15.520225   67249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:33:16.020497   67249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:33:16.520261   67249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:33:17.019700   67249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:33:17.520507   67249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:33:17.660212   67249 kubeadm.go:1088] duration metric: took 11.915870696s to wait for elevateKubeSystemPrivileges.
	I0103 20:33:17.660247   67249 kubeadm.go:406] StartCluster complete in 25.174518906s
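
The burst of "kubectl get sa default" calls above is minikube polling, at roughly 500 ms intervals, for the default ServiceAccount to exist before it finishes elevateKubeSystemPrivileges. The equivalent wait, expressed as a plain shell loop rather than minikube's Go implementation:

    # Wait until the default ServiceAccount exists in the default namespace
    until kubectl get serviceaccount default >/dev/null 2>&1; do
      sleep 0.5
    done
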
	I0103 20:33:17.660270   67249 settings.go:142] acquiring lock: {Name:mkd213c48538fa01cb82b417485055a8adbf5e4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:33:17.660350   67249 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17885-9609/kubeconfig
	I0103 20:33:17.662283   67249 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-9609/kubeconfig: {Name:mkbd4e6a8b39f5a4a43fb71671a7bbd8b1617cf1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:33:17.662580   67249 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0103 20:33:17.662668   67249 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0103 20:33:17.662773   67249 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-195281"
	I0103 20:33:17.662798   67249 addons.go:237] Setting addon storage-provisioner=true in "newest-cni-195281"
	I0103 20:33:17.662815   67249 config.go:182] Loaded profile config "newest-cni-195281": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0103 20:33:17.662855   67249 host.go:66] Checking if "newest-cni-195281" exists ...
	I0103 20:33:17.662870   67249 addons.go:69] Setting default-storageclass=true in profile "newest-cni-195281"
	I0103 20:33:17.662885   67249 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-195281"
	I0103 20:33:17.663309   67249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:33:17.663352   67249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:33:17.663354   67249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:33:17.663396   67249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:33:17.679378   67249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35853
	I0103 20:33:17.679381   67249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36405
	I0103 20:33:17.679756   67249 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:33:17.679913   67249 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:33:17.680300   67249 main.go:141] libmachine: Using API Version  1
	I0103 20:33:17.680319   67249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:33:17.680437   67249 main.go:141] libmachine: Using API Version  1
	I0103 20:33:17.680465   67249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:33:17.680725   67249 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:33:17.680785   67249 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:33:17.681141   67249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:33:17.681166   67249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:33:17.681335   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetState
	I0103 20:33:17.684878   67249 addons.go:237] Setting addon default-storageclass=true in "newest-cni-195281"
	I0103 20:33:17.684929   67249 host.go:66] Checking if "newest-cni-195281" exists ...
	I0103 20:33:17.685322   67249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:33:17.685370   67249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:33:17.698698   67249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37813
	I0103 20:33:17.699206   67249 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:33:17.699802   67249 main.go:141] libmachine: Using API Version  1
	I0103 20:33:17.699833   67249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:33:17.700253   67249 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:33:17.700494   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetState
	I0103 20:33:17.702827   67249 main.go:141] libmachine: (newest-cni-195281) Calling .DriverName
	I0103 20:33:17.702897   67249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38757
	I0103 20:33:17.704909   67249 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 20:33:17.703310   67249 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:33:17.705444   67249 main.go:141] libmachine: Using API Version  1
	I0103 20:33:17.706865   67249 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0103 20:33:17.706872   67249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:33:17.706878   67249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0103 20:33:17.706894   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHHostname
	I0103 20:33:17.707346   67249 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:33:17.707895   67249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:33:17.707927   67249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:33:17.710637   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:33:17.711043   67249 main.go:141] libmachine: (newest-cni-195281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:49:af", ip: ""} in network mk-newest-cni-195281: {Iface:virbr3 ExpiryTime:2024-01-03 21:32:34 +0000 UTC Type:0 Mac:52:54:00:5a:49:af Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:newest-cni-195281 Clientid:01:52:54:00:5a:49:af}
	I0103 20:33:17.711079   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined IP address 192.168.72.219 and MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:33:17.711194   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHPort
	I0103 20:33:17.711332   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHKeyPath
	I0103 20:33:17.711429   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHUsername
	I0103 20:33:17.711599   67249 sshutil.go:53] new ssh client: &{IP:192.168.72.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/newest-cni-195281/id_rsa Username:docker}
	I0103 20:33:17.724354   67249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41197
	I0103 20:33:17.724813   67249 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:33:17.725271   67249 main.go:141] libmachine: Using API Version  1
	I0103 20:33:17.725297   67249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:33:17.725645   67249 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:33:17.725827   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetState
	I0103 20:33:17.727646   67249 main.go:141] libmachine: (newest-cni-195281) Calling .DriverName
	I0103 20:33:17.727945   67249 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0103 20:33:17.727960   67249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0103 20:33:17.727975   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHHostname
	I0103 20:33:17.730967   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:33:17.731436   67249 main.go:141] libmachine: (newest-cni-195281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:49:af", ip: ""} in network mk-newest-cni-195281: {Iface:virbr3 ExpiryTime:2024-01-03 21:32:34 +0000 UTC Type:0 Mac:52:54:00:5a:49:af Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:newest-cni-195281 Clientid:01:52:54:00:5a:49:af}
	I0103 20:33:17.731455   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined IP address 192.168.72.219 and MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:33:17.731609   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHPort
	I0103 20:33:17.731794   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHKeyPath
	I0103 20:33:17.731934   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHUsername
	I0103 20:33:17.732074   67249 sshutil.go:53] new ssh client: &{IP:192.168.72.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/newest-cni-195281/id_rsa Username:docker}
	I0103 20:33:17.863402   67249 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0103 20:33:17.899270   67249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0103 20:33:17.911084   67249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0103 20:33:18.198358   67249 kapi.go:248] "coredns" deployment in "kube-system" namespace and "newest-cni-195281" context rescaled to 1 replicas
	I0103 20:33:18.198407   67249 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.219 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0103 20:33:18.200361   67249 out.go:177] * Verifying Kubernetes components...
	I0103 20:33:18.201742   67249 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 20:33:18.430854   67249 start.go:929] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
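
The sed pipeline at 20:33:17.863402 rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host-side gateway (192.168.72.1). The result can be inspected afterwards; the expected fragment, reconstructed from that pipeline, is shown in the comments:

    # Show the rewritten Corefile; the injected block should appear before the forward directive
    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
    #   hosts {
    #      192.168.72.1 host.minikube.internal
    #      fallthrough
    #   }
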
	I0103 20:33:18.785127   67249 main.go:141] libmachine: Making call to close driver server
	I0103 20:33:18.785165   67249 main.go:141] libmachine: (newest-cni-195281) Calling .Close
	I0103 20:33:18.785198   67249 main.go:141] libmachine: Making call to close driver server
	I0103 20:33:18.785223   67249 main.go:141] libmachine: (newest-cni-195281) Calling .Close
	I0103 20:33:18.785539   67249 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:33:18.785556   67249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:33:18.785568   67249 main.go:141] libmachine: Making call to close driver server
	I0103 20:33:18.785577   67249 main.go:141] libmachine: (newest-cni-195281) Calling .Close
	I0103 20:33:18.786232   67249 main.go:141] libmachine: (newest-cni-195281) DBG | Closing plugin on server side
	I0103 20:33:18.786243   67249 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:33:18.786263   67249 main.go:141] libmachine: (newest-cni-195281) DBG | Closing plugin on server side
	I0103 20:33:18.786267   67249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:33:18.786294   67249 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:33:18.786310   67249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:33:18.786325   67249 main.go:141] libmachine: Making call to close driver server
	I0103 20:33:18.786339   67249 main.go:141] libmachine: (newest-cni-195281) Calling .Close
	I0103 20:33:18.786621   67249 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:33:18.786643   67249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:33:18.786641   67249 main.go:141] libmachine: (newest-cni-195281) DBG | Closing plugin on server side
	I0103 20:33:18.787350   67249 api_server.go:52] waiting for apiserver process to appear ...
	I0103 20:33:18.787409   67249 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:33:18.809599   67249 api_server.go:72] duration metric: took 611.153897ms to wait for apiserver process to appear ...
	I0103 20:33:18.809631   67249 api_server.go:88] waiting for apiserver healthz status ...
	I0103 20:33:18.809654   67249 api_server.go:253] Checking apiserver healthz at https://192.168.72.219:8443/healthz ...
	I0103 20:33:18.815444   67249 main.go:141] libmachine: Making call to close driver server
	I0103 20:33:18.815470   67249 main.go:141] libmachine: (newest-cni-195281) Calling .Close
	I0103 20:33:18.815776   67249 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:33:18.815798   67249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:33:18.817627   67249 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0103 20:33:18.818945   67249 addons.go:508] enable addons completed in 1.156282938s: enabled=[storage-provisioner default-storageclass]
	I0103 20:33:18.824023   67249 api_server.go:279] https://192.168.72.219:8443/healthz returned 200:
	ok
	I0103 20:33:18.826233   67249 api_server.go:141] control plane version: v1.29.0-rc.2
	I0103 20:33:18.826262   67249 api_server.go:131] duration metric: took 16.623947ms to wait for apiserver health ...
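
The healthz probe above can be reproduced by hand from inside the node (for example via minikube ssh); /healthz is readable without client credentials under the default system:public-info-viewer binding, so only the cluster CA is needed, which sits at minikube's certificate directory shown earlier in the log:

    # Same check the log performs against the apiserver
    curl --cacert /var/lib/minikube/certs/ca.crt https://192.168.72.219:8443/healthz
    # expected response body: ok
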
	I0103 20:33:18.826273   67249 system_pods.go:43] waiting for kube-system pods to appear ...
	I0103 20:33:18.841277   67249 system_pods.go:59] 8 kube-system pods found
	I0103 20:33:18.841313   67249 system_pods.go:61] "coredns-76f75df574-74kf4" [c77d0e4f-8516-4a88-a37e-741daac7540e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0103 20:33:18.841325   67249 system_pods.go:61] "coredns-76f75df574-wxv97" [a316894f-a5ed-4aac-83c0-de2a37c3680f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0103 20:33:18.841334   67249 system_pods.go:61] "etcd-newest-cni-195281" [b025aa55-b0ac-48be-8238-5a1d512f4889] Running
	I0103 20:33:18.841340   67249 system_pods.go:61] "kube-apiserver-newest-cni-195281" [15d8768e-a11c-47f5-b820-973868ed880e] Running
	I0103 20:33:18.841346   67249 system_pods.go:61] "kube-controller-manager-newest-cni-195281" [2b9ff8b8-1800-4a98-84f9-0fb99f2a7d75] Running
	I0103 20:33:18.841353   67249 system_pods.go:61] "kube-proxy-m55j5" [d9a647a9-c868-4b74-ab53-88628c2883b1] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0103 20:33:18.841361   67249 system_pods.go:61] "kube-scheduler-newest-cni-195281" [cdfab88d-73de-4929-b45c-cf517a7d9000] Running
	I0103 20:33:18.841368   67249 system_pods.go:61] "storage-provisioner" [f110f04e-58e2-438f-8db6-615c277d7266] Pending
	I0103 20:33:18.841378   67249 system_pods.go:74] duration metric: took 15.098187ms to wait for pod list to return data ...
	I0103 20:33:18.841392   67249 default_sa.go:34] waiting for default service account to be created ...
	I0103 20:33:18.846938   67249 default_sa.go:45] found service account: "default"
	I0103 20:33:18.846966   67249 default_sa.go:55] duration metric: took 5.564322ms for default service account to be created ...
	I0103 20:33:18.846978   67249 kubeadm.go:581] duration metric: took 648.541157ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0103 20:33:18.846998   67249 node_conditions.go:102] verifying NodePressure condition ...
	I0103 20:33:18.850826   67249 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0103 20:33:18.850856   67249 node_conditions.go:123] node cpu capacity is 2
	I0103 20:33:18.850868   67249 node_conditions.go:105] duration metric: took 3.865295ms to run NodePressure ...
	I0103 20:33:18.850881   67249 start.go:228] waiting for startup goroutines ...
	I0103 20:33:18.850889   67249 start.go:233] waiting for cluster config update ...
	I0103 20:33:18.850901   67249 start.go:242] writing updated cluster config ...
	I0103 20:33:18.851174   67249 ssh_runner.go:195] Run: rm -f paused
	I0103 20:33:18.906368   67249 start.go:600] kubectl: 1.29.0, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0103 20:33:18.908596   67249 out.go:177] * Done! kubectl is now configured to use "newest-cni-195281" cluster and "default" namespace by default
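
At this point the kubeconfig at /home/jenkins/minikube-integration/17885-9609/kubeconfig carries a context named after the profile; a quick smoke test against the new cluster would look like:

    # Point kubectl at the kubeconfig this run updated, then confirm the control plane answers
    export KUBECONFIG=/home/jenkins/minikube-integration/17885-9609/kubeconfig
    kubectl config use-context newest-cni-195281
    kubectl get pods -A
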
	
	
	==> CRI-O <==
	-- Journal begins at Wed 2024-01-03 20:12:41 UTC, ends at Wed 2024-01-03 20:34:10 UTC. --
	Jan 03 20:34:10 embed-certs-451331 crio[714]: time="2024-01-03 20:34:10.045422570Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:5899d9b99bb80a0595e45a7a5d53017ec4cd2982219645bab2c8d682b07da88b,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-sx6gg,Uid:6a4ea161-1a32-4c3b-9a0d-b4c596492d8b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704312803187107535,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-sx6gg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a4ea161-1a32-4c3b-9a0d-b4c596492d8b,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-03T20:13:15.024004705Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b651f1b60878ca94ac4fe1055555d60d1750f986c5c3d804b23583d7d7ac9166,Metadata:&PodSandboxMetadata{Name:busybox,Uid:429c2056-bdb7-4ef4-9e0a-1689542c977e,Namespace:default,Attempt:0,},Sta
te:SANDBOX_READY,CreatedAt:1704312803182563584,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 429c2056-bdb7-4ef4-9e0a-1689542c977e,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-03T20:13:15.023999460Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:04d729c42a462023b951f729e02714b95f29e0b9618d4a369983ca32483cd82c,Metadata:&PodSandboxMetadata{Name:metrics-server-57f55c9bc5-sm8rb,Uid:12b9f83d-abf8-431c-a271-b8489d32f0de,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704312799122210960,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-57f55c9bc5-sm8rb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12b9f83d-abf8-431c-a271-b8489d32f0de,k8s-app: metrics-server,pod-template-hash: 57f55c9bc5,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-03T20:13:15.
023995865Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ed76d9d3acd8a38a86208b4ddf1aa6c578e079c645aa6a9cdb5cba5f2a036ad0,Metadata:&PodSandboxMetadata{Name:kube-proxy-fsnb9,Uid:d1f00cf1-e9c4-442b-a6b3-b633252b840c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704312795661847777,Labels:map[string]string{controller-revision-hash: 8486c7d9cd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-fsnb9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1f00cf1-e9c4-442b-a6b3-b633252b840c,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-03T20:13:15.023993528Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:efd4060c8de3f71163c1e9350215ce5da237ea9fc1c3dd46467cebe2f5c06e3b,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:cbce49e7-cef5-40a1-a017-906fcc77ef66,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704312795363118681,Labels:map[string
]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbce49e7-cef5-40a1-a017-906fcc77ef66,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.i
o/config.seen: 2024-01-03T20:13:15.023997746Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:347a463a5517897350359189bfcd8196e5a4353788e5cdf70557feac357e76c5,Metadata:&PodSandboxMetadata{Name:etcd-embed-certs-451331,Uid:cb324b9ebe7e80d000d3e5358d033c1a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704312788561033307,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-embed-certs-451331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb324b9ebe7e80d000d3e5358d033c1a,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.197:2379,kubernetes.io/config.hash: cb324b9ebe7e80d000d3e5358d033c1a,kubernetes.io/config.seen: 2024-01-03T20:13:08.010858345Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:fc7a4a9b7f40330f15b6beedc9ce4706823549eed5d11ada2261689174c6f633,Metadata:&PodSandboxMetadata{Name:kube-scheduler-embed-certs-451
331,Uid:b202e71ceb565a3c0d5e1a29eff74660,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704312788543843916,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-embed-certs-451331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b202e71ceb565a3c0d5e1a29eff74660,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b202e71ceb565a3c0d5e1a29eff74660,kubernetes.io/config.seen: 2024-01-03T20:13:08.010857641Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:36949c267ab4e5f7d9f22aaf53fc1ad96fcf391487332a1c095b0c79c1ef00ad,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-embed-certs-451331,Uid:63c4c7fb050d98f09cd0c55a15d3f146,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704312788519962457,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-embed-certs-451
331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63c4c7fb050d98f09cd0c55a15d3f146,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 63c4c7fb050d98f09cd0c55a15d3f146,kubernetes.io/config.seen: 2024-01-03T20:13:08.010856767Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3023709de312df72460936079c9b7e303b80a5a349e0175a734d680329347254,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-451331,Uid:b98fe1c42fefc48f470b8f9db70b8685,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704312788511302387,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-451331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b98fe1c42fefc48f470b8f9db70b8685,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.197:8443,kubernetes.io/config.hash: b98fe1c42fefc48f470b8f9db7
0b8685,kubernetes.io/config.seen: 2024-01-03T20:13:08.010852263Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=b6cef565-ddeb-4225-a017-79dfcb5c54ce name=/runtime.v1.RuntimeService/ListPodSandbox
	Jan 03 20:34:10 embed-certs-451331 crio[714]: time="2024-01-03 20:34:10.046373273Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=d433f6d0-f54d-481c-977d-92db1002094d name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:34:10 embed-certs-451331 crio[714]: time="2024-01-03 20:34:10.046549858Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=d433f6d0-f54d-481c-977d-92db1002094d name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:34:10 embed-certs-451331 crio[714]: time="2024-01-03 20:34:10.046724614Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0ed16e65a5dbabdbadbf6d56a4fd7bafe80d50867ed1829e86bfa8f28e8fb719,PodSandboxId:efd4060c8de3f71163c1e9350215ce5da237ea9fc1c3dd46467cebe2f5c06e3b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704312827279257291,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbce49e7-cef5-40a1-a017-906fcc77ef66,},Annotations:map[string]string{io.kubernetes.container.hash: eadca64e,io.kubernetes.container.restartCount: 2,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ac00312e7c188202128410fbd7a837dc9109127b647d5402eb8e9662c9af403,PodSandboxId:b651f1b60878ca94ac4fe1055555d60d1750f986c5c3d804b23583d7d7ac9166,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1704312806973068085,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 429c2056-bdb7-4ef4-9e0a-1689542c977e,},Annotations:map[string]string{io.kubernetes.container.hash: a819efdb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e982a226a7c2e7dda117aabd99279729198043d8b9c79fdc0904c8384f9a031b,PodSandboxId:5899d9b99bb80a0595e45a7a5d53017ec4cd2982219645bab2c8d682b07da88b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704312803919406082,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-sx6gg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a4ea161-1a32-4c3b-9a0d-b4c596492d8b,},Annotations:map[string]string{io.kubernetes.container.hash: a0f49294,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a076ccb3aaf528efca2f2b4a38d06dc3b8376212edc210731b7c268a24909ccf,PodSandboxId:ed76d9d3acd8a38a86208b4ddf1aa6c578e079c645aa6a9cdb5cba5f2a036ad0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704312796341925081,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fsnb9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1f00cf1-
e9c4-442b-a6b3-b633252b840c,},Annotations:map[string]string{io.kubernetes.container.hash: 59f57478,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91cc8e54c59c4642dc1a808adf81ed78af36cc6dae57d37fd913d2d01d232f3d,PodSandboxId:fc7a4a9b7f40330f15b6beedc9ce4706823549eed5d11ada2261689174c6f633,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704312789595901237,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-451331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b202e71ceb565a3
c0d5e1a29eff74660,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8049f81441fd2e7bb236ca49b119c49cefb7a5c69b6880f606b3cc2789145523,PodSandboxId:36949c267ab4e5f7d9f22aaf53fc1ad96fcf391487332a1c095b0c79c1ef00ad,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704312789369771905,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-451331,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 63c4c7fb050d98f09cd0c55a15d3f146,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5b2310ec90e1b2cf6666d6b054b6e3233664a226da6f2c5dbb9f1aad6bf5e40,PodSandboxId:347a463a5517897350359189bfcd8196e5a4353788e5cdf70557feac357e76c5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704312789324121741,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-451331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb324b9ebe7e80d000d3e5358d033c1a,},Anno
tations:map[string]string{io.kubernetes.container.hash: 17c5f498,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b43e6c342d85d7f5a7233dc5c79962eece071e18b4bbcd6583c66d34a8bba0d6,PodSandboxId:3023709de312df72460936079c9b7e303b80a5a349e0175a734d680329347254,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704312788995177952,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-451331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b98fe1c42fefc48f470b8f9db70b8685,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 8a333982,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=d433f6d0-f54d-481c-977d-92db1002094d name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:34:10 embed-certs-451331 crio[714]: time="2024-01-03 20:34:10.087935400Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=d668d764-eabf-44e5-9d1d-5bdf28c0c925 name=/runtime.v1.RuntimeService/Version
	Jan 03 20:34:10 embed-certs-451331 crio[714]: time="2024-01-03 20:34:10.088042344Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=d668d764-eabf-44e5-9d1d-5bdf28c0c925 name=/runtime.v1.RuntimeService/Version
	Jan 03 20:34:10 embed-certs-451331 crio[714]: time="2024-01-03 20:34:10.089275528Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=8f2ac0f9-7c90-4bb4-aa2d-51c73c493801 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 20:34:10 embed-certs-451331 crio[714]: time="2024-01-03 20:34:10.089758772Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704314050089737243,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=8f2ac0f9-7c90-4bb4-aa2d-51c73c493801 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 20:34:10 embed-certs-451331 crio[714]: time="2024-01-03 20:34:10.090490481Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=05f89bd2-cbee-4357-b575-3593a0d30017 name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:34:10 embed-certs-451331 crio[714]: time="2024-01-03 20:34:10.090559512Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=05f89bd2-cbee-4357-b575-3593a0d30017 name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:34:10 embed-certs-451331 crio[714]: time="2024-01-03 20:34:10.090749694Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0ed16e65a5dbabdbadbf6d56a4fd7bafe80d50867ed1829e86bfa8f28e8fb719,PodSandboxId:efd4060c8de3f71163c1e9350215ce5da237ea9fc1c3dd46467cebe2f5c06e3b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704312827279257291,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbce49e7-cef5-40a1-a017-906fcc77ef66,},Annotations:map[string]string{io.kubernetes.container.hash: eadca64e,io.kubernetes.container.restartCount: 2,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ac00312e7c188202128410fbd7a837dc9109127b647d5402eb8e9662c9af403,PodSandboxId:b651f1b60878ca94ac4fe1055555d60d1750f986c5c3d804b23583d7d7ac9166,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1704312806973068085,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 429c2056-bdb7-4ef4-9e0a-1689542c977e,},Annotations:map[string]string{io.kubernetes.container.hash: a819efdb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e982a226a7c2e7dda117aabd99279729198043d8b9c79fdc0904c8384f9a031b,PodSandboxId:5899d9b99bb80a0595e45a7a5d53017ec4cd2982219645bab2c8d682b07da88b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704312803919406082,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-sx6gg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a4ea161-1a32-4c3b-9a0d-b4c596492d8b,},Annotations:map[string]string{io.kubernetes.container.hash: a0f49294,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a076ccb3aaf528efca2f2b4a38d06dc3b8376212edc210731b7c268a24909ccf,PodSandboxId:ed76d9d3acd8a38a86208b4ddf1aa6c578e079c645aa6a9cdb5cba5f2a036ad0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704312796341925081,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fsnb9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1f00cf1-
e9c4-442b-a6b3-b633252b840c,},Annotations:map[string]string{io.kubernetes.container.hash: 59f57478,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c57ed4c58edf02d0e73cd6dfc6799993556d74916ffe4c07b91d85346f291e2,PodSandboxId:efd4060c8de3f71163c1e9350215ce5da237ea9fc1c3dd46467cebe2f5c06e3b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1704312796003114553,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbce49e7-ce
f5-40a1-a017-906fcc77ef66,},Annotations:map[string]string{io.kubernetes.container.hash: eadca64e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91cc8e54c59c4642dc1a808adf81ed78af36cc6dae57d37fd913d2d01d232f3d,PodSandboxId:fc7a4a9b7f40330f15b6beedc9ce4706823549eed5d11ada2261689174c6f633,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704312789595901237,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-451331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b202e71ceb565a3c0
d5e1a29eff74660,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8049f81441fd2e7bb236ca49b119c49cefb7a5c69b6880f606b3cc2789145523,PodSandboxId:36949c267ab4e5f7d9f22aaf53fc1ad96fcf391487332a1c095b0c79c1ef00ad,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704312789369771905,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-451331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 63c4c7fb050d98f09cd0c55a15d3f146,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5b2310ec90e1b2cf6666d6b054b6e3233664a226da6f2c5dbb9f1aad6bf5e40,PodSandboxId:347a463a5517897350359189bfcd8196e5a4353788e5cdf70557feac357e76c5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704312789324121741,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-451331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb324b9ebe7e80d000d3e5358d033c1a,},Annota
tions:map[string]string{io.kubernetes.container.hash: 17c5f498,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b43e6c342d85d7f5a7233dc5c79962eece071e18b4bbcd6583c66d34a8bba0d6,PodSandboxId:3023709de312df72460936079c9b7e303b80a5a349e0175a734d680329347254,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704312788995177952,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-451331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b98fe1c42fefc48f470b8f9db70b8685,},Annotations:map[
string]string{io.kubernetes.container.hash: 8a333982,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=05f89bd2-cbee-4357-b575-3593a0d30017 name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:34:10 embed-certs-451331 crio[714]: time="2024-01-03 20:34:10.133992719Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=c65563ab-7820-496f-9e3b-1bbb2694ba10 name=/runtime.v1.RuntimeService/Version
	Jan 03 20:34:10 embed-certs-451331 crio[714]: time="2024-01-03 20:34:10.134126572Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=c65563ab-7820-496f-9e3b-1bbb2694ba10 name=/runtime.v1.RuntimeService/Version
	Jan 03 20:34:10 embed-certs-451331 crio[714]: time="2024-01-03 20:34:10.136226451Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=08b2302d-cc6f-4e39-8c18-b426745388a9 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 20:34:10 embed-certs-451331 crio[714]: time="2024-01-03 20:34:10.136605186Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704314050136590276,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=08b2302d-cc6f-4e39-8c18-b426745388a9 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 20:34:10 embed-certs-451331 crio[714]: time="2024-01-03 20:34:10.137380637Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c54a41d2-d0f4-44c1-adab-bda5b2360885 name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:34:10 embed-certs-451331 crio[714]: time="2024-01-03 20:34:10.137460705Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c54a41d2-d0f4-44c1-adab-bda5b2360885 name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:34:10 embed-certs-451331 crio[714]: time="2024-01-03 20:34:10.141725375Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0ed16e65a5dbabdbadbf6d56a4fd7bafe80d50867ed1829e86bfa8f28e8fb719,PodSandboxId:efd4060c8de3f71163c1e9350215ce5da237ea9fc1c3dd46467cebe2f5c06e3b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704312827279257291,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbce49e7-cef5-40a1-a017-906fcc77ef66,},Annotations:map[string]string{io.kubernetes.container.hash: eadca64e,io.kubernetes.container.restartCount: 2,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ac00312e7c188202128410fbd7a837dc9109127b647d5402eb8e9662c9af403,PodSandboxId:b651f1b60878ca94ac4fe1055555d60d1750f986c5c3d804b23583d7d7ac9166,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1704312806973068085,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 429c2056-bdb7-4ef4-9e0a-1689542c977e,},Annotations:map[string]string{io.kubernetes.container.hash: a819efdb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e982a226a7c2e7dda117aabd99279729198043d8b9c79fdc0904c8384f9a031b,PodSandboxId:5899d9b99bb80a0595e45a7a5d53017ec4cd2982219645bab2c8d682b07da88b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704312803919406082,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-sx6gg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a4ea161-1a32-4c3b-9a0d-b4c596492d8b,},Annotations:map[string]string{io.kubernetes.container.hash: a0f49294,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a076ccb3aaf528efca2f2b4a38d06dc3b8376212edc210731b7c268a24909ccf,PodSandboxId:ed76d9d3acd8a38a86208b4ddf1aa6c578e079c645aa6a9cdb5cba5f2a036ad0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704312796341925081,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fsnb9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1f00cf1-
e9c4-442b-a6b3-b633252b840c,},Annotations:map[string]string{io.kubernetes.container.hash: 59f57478,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c57ed4c58edf02d0e73cd6dfc6799993556d74916ffe4c07b91d85346f291e2,PodSandboxId:efd4060c8de3f71163c1e9350215ce5da237ea9fc1c3dd46467cebe2f5c06e3b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1704312796003114553,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbce49e7-ce
f5-40a1-a017-906fcc77ef66,},Annotations:map[string]string{io.kubernetes.container.hash: eadca64e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91cc8e54c59c4642dc1a808adf81ed78af36cc6dae57d37fd913d2d01d232f3d,PodSandboxId:fc7a4a9b7f40330f15b6beedc9ce4706823549eed5d11ada2261689174c6f633,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704312789595901237,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-451331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b202e71ceb565a3c0
d5e1a29eff74660,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8049f81441fd2e7bb236ca49b119c49cefb7a5c69b6880f606b3cc2789145523,PodSandboxId:36949c267ab4e5f7d9f22aaf53fc1ad96fcf391487332a1c095b0c79c1ef00ad,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704312789369771905,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-451331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 63c4c7fb050d98f09cd0c55a15d3f146,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5b2310ec90e1b2cf6666d6b054b6e3233664a226da6f2c5dbb9f1aad6bf5e40,PodSandboxId:347a463a5517897350359189bfcd8196e5a4353788e5cdf70557feac357e76c5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704312789324121741,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-451331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb324b9ebe7e80d000d3e5358d033c1a,},Annota
tions:map[string]string{io.kubernetes.container.hash: 17c5f498,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b43e6c342d85d7f5a7233dc5c79962eece071e18b4bbcd6583c66d34a8bba0d6,PodSandboxId:3023709de312df72460936079c9b7e303b80a5a349e0175a734d680329347254,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704312788995177952,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-451331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b98fe1c42fefc48f470b8f9db70b8685,},Annotations:map[
string]string{io.kubernetes.container.hash: 8a333982,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c54a41d2-d0f4-44c1-adab-bda5b2360885 name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:34:10 embed-certs-451331 crio[714]: time="2024-01-03 20:34:10.183928377Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=10858c83-87ad-4f61-b200-4bee854979df name=/runtime.v1.RuntimeService/Version
	Jan 03 20:34:10 embed-certs-451331 crio[714]: time="2024-01-03 20:34:10.184018417Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=10858c83-87ad-4f61-b200-4bee854979df name=/runtime.v1.RuntimeService/Version
	Jan 03 20:34:10 embed-certs-451331 crio[714]: time="2024-01-03 20:34:10.185475108Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=209d0306-a201-465b-aa14-f0d3324d7e2f name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 20:34:10 embed-certs-451331 crio[714]: time="2024-01-03 20:34:10.186006471Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704314050185989426,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=209d0306-a201-465b-aa14-f0d3324d7e2f name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 20:34:10 embed-certs-451331 crio[714]: time="2024-01-03 20:34:10.186558433Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=dbdda429-2a0e-4be0-8c95-ef375551db01 name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:34:10 embed-certs-451331 crio[714]: time="2024-01-03 20:34:10.186633646Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=dbdda429-2a0e-4be0-8c95-ef375551db01 name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:34:10 embed-certs-451331 crio[714]: time="2024-01-03 20:34:10.186919863Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0ed16e65a5dbabdbadbf6d56a4fd7bafe80d50867ed1829e86bfa8f28e8fb719,PodSandboxId:efd4060c8de3f71163c1e9350215ce5da237ea9fc1c3dd46467cebe2f5c06e3b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704312827279257291,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbce49e7-cef5-40a1-a017-906fcc77ef66,},Annotations:map[string]string{io.kubernetes.container.hash: eadca64e,io.kubernetes.container.restartCount: 2,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ac00312e7c188202128410fbd7a837dc9109127b647d5402eb8e9662c9af403,PodSandboxId:b651f1b60878ca94ac4fe1055555d60d1750f986c5c3d804b23583d7d7ac9166,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1704312806973068085,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 429c2056-bdb7-4ef4-9e0a-1689542c977e,},Annotations:map[string]string{io.kubernetes.container.hash: a819efdb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e982a226a7c2e7dda117aabd99279729198043d8b9c79fdc0904c8384f9a031b,PodSandboxId:5899d9b99bb80a0595e45a7a5d53017ec4cd2982219645bab2c8d682b07da88b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704312803919406082,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-sx6gg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a4ea161-1a32-4c3b-9a0d-b4c596492d8b,},Annotations:map[string]string{io.kubernetes.container.hash: a0f49294,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a076ccb3aaf528efca2f2b4a38d06dc3b8376212edc210731b7c268a24909ccf,PodSandboxId:ed76d9d3acd8a38a86208b4ddf1aa6c578e079c645aa6a9cdb5cba5f2a036ad0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704312796341925081,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fsnb9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1f00cf1-
e9c4-442b-a6b3-b633252b840c,},Annotations:map[string]string{io.kubernetes.container.hash: 59f57478,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c57ed4c58edf02d0e73cd6dfc6799993556d74916ffe4c07b91d85346f291e2,PodSandboxId:efd4060c8de3f71163c1e9350215ce5da237ea9fc1c3dd46467cebe2f5c06e3b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1704312796003114553,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbce49e7-ce
f5-40a1-a017-906fcc77ef66,},Annotations:map[string]string{io.kubernetes.container.hash: eadca64e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91cc8e54c59c4642dc1a808adf81ed78af36cc6dae57d37fd913d2d01d232f3d,PodSandboxId:fc7a4a9b7f40330f15b6beedc9ce4706823549eed5d11ada2261689174c6f633,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704312789595901237,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-451331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b202e71ceb565a3c0
d5e1a29eff74660,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8049f81441fd2e7bb236ca49b119c49cefb7a5c69b6880f606b3cc2789145523,PodSandboxId:36949c267ab4e5f7d9f22aaf53fc1ad96fcf391487332a1c095b0c79c1ef00ad,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704312789369771905,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-451331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 63c4c7fb050d98f09cd0c55a15d3f146,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5b2310ec90e1b2cf6666d6b054b6e3233664a226da6f2c5dbb9f1aad6bf5e40,PodSandboxId:347a463a5517897350359189bfcd8196e5a4353788e5cdf70557feac357e76c5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704312789324121741,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-451331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb324b9ebe7e80d000d3e5358d033c1a,},Annota
tions:map[string]string{io.kubernetes.container.hash: 17c5f498,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b43e6c342d85d7f5a7233dc5c79962eece071e18b4bbcd6583c66d34a8bba0d6,PodSandboxId:3023709de312df72460936079c9b7e303b80a5a349e0175a734d680329347254,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704312788995177952,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-451331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b98fe1c42fefc48f470b8f9db70b8685,},Annotations:map[
string]string{io.kubernetes.container.hash: 8a333982,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=dbdda429-2a0e-4be0-8c95-ef375551db01 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0ed16e65a5dba       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Running             storage-provisioner       2                   efd4060c8de3f       storage-provisioner
	3ac00312e7c18       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   20 minutes ago      Running             busybox                   1                   b651f1b60878c       busybox
	e982a226a7c2e       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      20 minutes ago      Running             coredns                   1                   5899d9b99bb80       coredns-5dd5756b68-sx6gg
	a076ccb3aaf52       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      20 minutes ago      Running             kube-proxy                1                   ed76d9d3acd8a       kube-proxy-fsnb9
	3c57ed4c58edf       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Exited              storage-provisioner       1                   efd4060c8de3f       storage-provisioner
	91cc8e54c59c4       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      21 minutes ago      Running             kube-scheduler            1                   fc7a4a9b7f403       kube-scheduler-embed-certs-451331
	8049f81441fd2       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      21 minutes ago      Running             kube-controller-manager   1                   36949c267ab4e       kube-controller-manager-embed-certs-451331
	d5b2310ec90e1       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      21 minutes ago      Running             etcd                      1                   347a463a55178       etcd-embed-certs-451331
	b43e6c342d85d       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      21 minutes ago      Running             kube-apiserver            1                   3023709de312d       kube-apiserver-embed-certs-451331
	
	
	==> coredns [e982a226a7c2e7dda117aabd99279729198043d8b9c79fdc0904c8384f9a031b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:43615 - 38443 "HINFO IN 5833282349375032069.6189678721608338515. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.00613889s
	
	
	==> describe nodes <==
	Name:               embed-certs-451331
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-451331
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b6a81cbc05f28310ff11df4170e79e2b8bf477a
	                    minikube.k8s.io/name=embed-certs-451331
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_03T20_04_56_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 03 Jan 2024 20:04:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-451331
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 03 Jan 2024 20:34:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 03 Jan 2024 20:34:08 +0000   Wed, 03 Jan 2024 20:04:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 03 Jan 2024 20:34:08 +0000   Wed, 03 Jan 2024 20:04:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 03 Jan 2024 20:34:08 +0000   Wed, 03 Jan 2024 20:04:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 03 Jan 2024 20:34:08 +0000   Wed, 03 Jan 2024 20:13:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.197
	  Hostname:    embed-certs-451331
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 43999723e8714a46b9bb7ee411ed1129
	  System UUID:                43999723-e871-4a46-b9bb-7ee411ed1129
	  Boot ID:                    3cd38969-9396-4492-a5a4-e874524061f1
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-5dd5756b68-sx6gg                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29m
	  kube-system                 etcd-embed-certs-451331                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-embed-certs-451331             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-embed-certs-451331    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-fsnb9                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-embed-certs-451331             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-57f55c9bc5-sm8rb               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 20m                kube-proxy       
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node embed-certs-451331 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node embed-certs-451331 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29m                kubelet          Node embed-certs-451331 status is now: NodeHasSufficientPID
	  Normal  NodeReady                29m                kubelet          Node embed-certs-451331 status is now: NodeReady
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           29m                node-controller  Node embed-certs-451331 event: Registered Node embed-certs-451331 in Controller
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node embed-certs-451331 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node embed-certs-451331 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node embed-certs-451331 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           20m                node-controller  Node embed-certs-451331 event: Registered Node embed-certs-451331 in Controller
	
	
	==> dmesg <==
	[Jan 3 20:12] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.062573] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.332954] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.318945] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.129871] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.546459] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.045258] systemd-fstab-generator[638]: Ignoring "noauto" for root device
	[  +0.112979] systemd-fstab-generator[649]: Ignoring "noauto" for root device
	[  +0.153194] systemd-fstab-generator[662]: Ignoring "noauto" for root device
	[  +0.116447] systemd-fstab-generator[673]: Ignoring "noauto" for root device
	[  +0.226191] systemd-fstab-generator[697]: Ignoring "noauto" for root device
	[Jan 3 20:13] systemd-fstab-generator[914]: Ignoring "noauto" for root device
	[ +15.499205] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [d5b2310ec90e1b2cf6666d6b054b6e3233664a226da6f2c5dbb9f1aad6bf5e40] <==
	{"level":"info","ts":"2024-01-03T20:13:12.738402Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"37309ea842b3f618 received MsgPreVoteResp from 37309ea842b3f618 at term 2"}
	{"level":"info","ts":"2024-01-03T20:13:12.738428Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"37309ea842b3f618 became candidate at term 3"}
	{"level":"info","ts":"2024-01-03T20:13:12.738437Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"37309ea842b3f618 received MsgVoteResp from 37309ea842b3f618 at term 3"}
	{"level":"info","ts":"2024-01-03T20:13:12.738449Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"37309ea842b3f618 became leader at term 3"}
	{"level":"info","ts":"2024-01-03T20:13:12.738459Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 37309ea842b3f618 elected leader 37309ea842b3f618 at term 3"}
	{"level":"info","ts":"2024-01-03T20:13:12.741273Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"37309ea842b3f618","local-member-attributes":"{Name:embed-certs-451331 ClientURLs:[https://192.168.50.197:2379]}","request-path":"/0/members/37309ea842b3f618/attributes","cluster-id":"b82d2d0acaa655b","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-03T20:13:12.741325Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-03T20:13:12.741588Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-03T20:13:12.741662Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-03T20:13:12.741714Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-03T20:13:12.742523Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-03T20:13:12.743382Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.197:2379"}
	{"level":"info","ts":"2024-01-03T20:13:16.312211Z","caller":"traceutil/trace.go:171","msg":"trace[196740903] transaction","detail":"{read_only:false; number_of_response:0; response_revision:493; }","duration":"101.15089ms","start":"2024-01-03T20:13:16.211039Z","end":"2024-01-03T20:13:16.31219Z","steps":["trace[196740903] 'process raft request'  (duration: 101.09332ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-03T20:13:27.541586Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"119.306999ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-57f55c9bc5-sm8rb\" ","response":"range_response_count:1 size:4071"}
	{"level":"info","ts":"2024-01-03T20:13:27.541685Z","caller":"traceutil/trace.go:171","msg":"trace[1976195513] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-57f55c9bc5-sm8rb; range_end:; response_count:1; response_revision:589; }","duration":"119.452704ms","start":"2024-01-03T20:13:27.422218Z","end":"2024-01-03T20:13:27.541671Z","steps":["trace[1976195513] 'range keys from in-memory index tree'  (duration: 119.067249ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-03T20:23:12.783931Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":833}
	{"level":"info","ts":"2024-01-03T20:23:12.787276Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":833,"took":"2.323476ms","hash":845675121}
	{"level":"info","ts":"2024-01-03T20:23:12.787369Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":845675121,"revision":833,"compact-revision":-1}
	{"level":"info","ts":"2024-01-03T20:28:12.790897Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1075}
	{"level":"info","ts":"2024-01-03T20:28:12.793376Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1075,"took":"2.165917ms","hash":682595211}
	{"level":"info","ts":"2024-01-03T20:28:12.793451Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":682595211,"revision":1075,"compact-revision":833}
	{"level":"info","ts":"2024-01-03T20:32:53.542108Z","caller":"traceutil/trace.go:171","msg":"trace[149168729] transaction","detail":"{read_only:false; response_revision:1545; number_of_response:1; }","duration":"222.969893ms","start":"2024-01-03T20:32:53.319102Z","end":"2024-01-03T20:32:53.542072Z","steps":["trace[149168729] 'process raft request'  (duration: 222.810553ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-03T20:33:12.800949Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1318}
	{"level":"info","ts":"2024-01-03T20:33:12.803101Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1318,"took":"1.856082ms","hash":3524453962}
	{"level":"info","ts":"2024-01-03T20:33:12.803165Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3524453962,"revision":1318,"compact-revision":1075}
	
	
	==> kernel <==
	 20:34:10 up 21 min,  0 users,  load average: 0.17, 0.19, 0.17
	Linux embed-certs-451331 5.10.57 #1 SMP Sat Dec 16 11:03:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [b43e6c342d85d7f5a7233dc5c79962eece071e18b4bbcd6583c66d34a8bba0d6] <==
	E0103 20:29:15.472536       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0103 20:29:15.472568       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0103 20:30:14.352314       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0103 20:31:14.352585       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0103 20:31:15.471309       1 handler_proxy.go:93] no RequestInfo found in the context
	E0103 20:31:15.471417       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0103 20:31:15.471544       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0103 20:31:15.472842       1 handler_proxy.go:93] no RequestInfo found in the context
	E0103 20:31:15.472968       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0103 20:31:15.472996       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0103 20:32:14.352420       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0103 20:33:14.351974       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0103 20:33:14.473914       1 handler_proxy.go:93] no RequestInfo found in the context
	E0103 20:33:14.474052       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0103 20:33:14.474539       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0103 20:33:15.475037       1 handler_proxy.go:93] no RequestInfo found in the context
	E0103 20:33:15.475138       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0103 20:33:15.475164       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0103 20:33:15.475169       1 handler_proxy.go:93] no RequestInfo found in the context
	E0103 20:33:15.475271       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0103 20:33:15.476423       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [8049f81441fd2e7bb236ca49b119c49cefb7a5c69b6880f606b3cc2789145523] <==
	I0103 20:28:28.102371       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0103 20:28:57.557429       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0103 20:28:58.111233       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0103 20:29:25.064290       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="600.157µs"
	E0103 20:29:27.572106       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0103 20:29:28.125849       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0103 20:29:40.068598       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="597.843µs"
	E0103 20:29:57.577759       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0103 20:29:58.139492       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0103 20:30:27.586158       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0103 20:30:28.147953       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0103 20:30:57.592259       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0103 20:30:58.159216       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0103 20:31:27.600501       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0103 20:31:28.169554       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0103 20:31:57.608145       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0103 20:31:58.178905       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0103 20:32:27.616436       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0103 20:32:28.191242       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0103 20:32:57.623255       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0103 20:32:58.201297       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0103 20:33:27.633131       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0103 20:33:28.209674       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0103 20:33:57.639077       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0103 20:33:58.220418       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [a076ccb3aaf528efca2f2b4a38d06dc3b8376212edc210731b7c268a24909ccf] <==
	I0103 20:13:16.668211       1 server_others.go:69] "Using iptables proxy"
	I0103 20:13:16.683293       1 node.go:141] Successfully retrieved node IP: 192.168.50.197
	I0103 20:13:16.739930       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0103 20:13:16.740003       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0103 20:13:16.742934       1 server_others.go:152] "Using iptables Proxier"
	I0103 20:13:16.743012       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0103 20:13:16.743237       1 server.go:846] "Version info" version="v1.28.4"
	I0103 20:13:16.743280       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0103 20:13:16.744157       1 config.go:188] "Starting service config controller"
	I0103 20:13:16.744211       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0103 20:13:16.744244       1 config.go:97] "Starting endpoint slice config controller"
	I0103 20:13:16.744260       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0103 20:13:16.746071       1 config.go:315] "Starting node config controller"
	I0103 20:13:16.746113       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0103 20:13:16.844382       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0103 20:13:16.844484       1 shared_informer.go:318] Caches are synced for service config
	I0103 20:13:16.847046       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [91cc8e54c59c4642dc1a808adf81ed78af36cc6dae57d37fd913d2d01d232f3d] <==
	I0103 20:13:11.666382       1 serving.go:348] Generated self-signed cert in-memory
	W0103 20:13:14.432179       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0103 20:13:14.432269       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0103 20:13:14.432300       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0103 20:13:14.432323       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0103 20:13:14.454344       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0103 20:13:14.454390       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0103 20:13:14.456408       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0103 20:13:14.456532       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0103 20:13:14.458329       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0103 20:13:14.458401       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0103 20:13:14.557687       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Wed 2024-01-03 20:12:41 UTC, ends at Wed 2024-01-03 20:34:10 UTC. --
	Jan 03 20:31:40 embed-certs-451331 kubelet[920]: E0103 20:31:40.046121     920 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-sm8rb" podUID="12b9f83d-abf8-431c-a271-b8489d32f0de"
	Jan 03 20:31:55 embed-certs-451331 kubelet[920]: E0103 20:31:55.045759     920 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-sm8rb" podUID="12b9f83d-abf8-431c-a271-b8489d32f0de"
	Jan 03 20:32:08 embed-certs-451331 kubelet[920]: E0103 20:32:08.073293     920 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 03 20:32:08 embed-certs-451331 kubelet[920]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 03 20:32:08 embed-certs-451331 kubelet[920]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 03 20:32:08 embed-certs-451331 kubelet[920]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 03 20:32:11 embed-certs-451331 kubelet[920]: E0103 20:32:11.045650     920 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-sm8rb" podUID="12b9f83d-abf8-431c-a271-b8489d32f0de"
	Jan 03 20:32:22 embed-certs-451331 kubelet[920]: E0103 20:32:22.046139     920 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-sm8rb" podUID="12b9f83d-abf8-431c-a271-b8489d32f0de"
	Jan 03 20:32:33 embed-certs-451331 kubelet[920]: E0103 20:32:33.045944     920 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-sm8rb" podUID="12b9f83d-abf8-431c-a271-b8489d32f0de"
	Jan 03 20:32:47 embed-certs-451331 kubelet[920]: E0103 20:32:47.045632     920 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-sm8rb" podUID="12b9f83d-abf8-431c-a271-b8489d32f0de"
	Jan 03 20:33:02 embed-certs-451331 kubelet[920]: E0103 20:33:02.046575     920 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-sm8rb" podUID="12b9f83d-abf8-431c-a271-b8489d32f0de"
	Jan 03 20:33:08 embed-certs-451331 kubelet[920]: E0103 20:33:08.073575     920 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 03 20:33:08 embed-certs-451331 kubelet[920]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 03 20:33:08 embed-certs-451331 kubelet[920]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 03 20:33:08 embed-certs-451331 kubelet[920]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 03 20:33:08 embed-certs-451331 kubelet[920]: E0103 20:33:08.077027     920 container_manager_linux.go:514] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service"
	Jan 03 20:33:16 embed-certs-451331 kubelet[920]: E0103 20:33:16.047582     920 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-sm8rb" podUID="12b9f83d-abf8-431c-a271-b8489d32f0de"
	Jan 03 20:33:28 embed-certs-451331 kubelet[920]: E0103 20:33:28.049562     920 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-sm8rb" podUID="12b9f83d-abf8-431c-a271-b8489d32f0de"
	Jan 03 20:33:42 embed-certs-451331 kubelet[920]: E0103 20:33:42.049561     920 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-sm8rb" podUID="12b9f83d-abf8-431c-a271-b8489d32f0de"
	Jan 03 20:33:55 embed-certs-451331 kubelet[920]: E0103 20:33:55.046200     920 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-sm8rb" podUID="12b9f83d-abf8-431c-a271-b8489d32f0de"
	Jan 03 20:34:08 embed-certs-451331 kubelet[920]: E0103 20:34:08.047072     920 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-sm8rb" podUID="12b9f83d-abf8-431c-a271-b8489d32f0de"
	Jan 03 20:34:08 embed-certs-451331 kubelet[920]: E0103 20:34:08.072047     920 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 03 20:34:08 embed-certs-451331 kubelet[920]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 03 20:34:08 embed-certs-451331 kubelet[920]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 03 20:34:08 embed-certs-451331 kubelet[920]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [0ed16e65a5dbabdbadbf6d56a4fd7bafe80d50867ed1829e86bfa8f28e8fb719] <==
	I0103 20:13:47.454346       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0103 20:13:47.469076       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0103 20:13:47.469201       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0103 20:14:04.879494       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0103 20:14:04.879712       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-451331_24b7586b-b269-4a34-a6ee-21fcdf43cedc!
	I0103 20:14:04.881294       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7f931a4b-3ae8-49f4-84c3-558c77e6b271", APIVersion:"v1", ResourceVersion:"616", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-451331_24b7586b-b269-4a34-a6ee-21fcdf43cedc became leader
	I0103 20:14:04.980387       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-451331_24b7586b-b269-4a34-a6ee-21fcdf43cedc!
	
	
	==> storage-provisioner [3c57ed4c58edf02d0e73cd6dfc6799993556d74916ffe4c07b91d85346f291e2] <==
	I0103 20:13:16.508894       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0103 20:13:46.517331       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-451331 -n embed-certs-451331
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-451331 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-sm8rb
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-451331 describe pod metrics-server-57f55c9bc5-sm8rb
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-451331 describe pod metrics-server-57f55c9bc5-sm8rb: exit status 1 (69.791769ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-sm8rb" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-451331 describe pod metrics-server-57f55c9bc5-sm8rb: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (449.68s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (346.28s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-749210 -n no-preload-749210
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-01-03 20:33:01.711465224 +0000 UTC m=+5747.184042208
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-749210 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-749210 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.73µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-749210 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
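
The image check above can be reproduced against the same profile with kubectl alone; a minimal sketch, assuming the no-preload-749210 cluster is still reachable and using only the label, namespace, and deployment name reported by the test:

	kubectl --context no-preload-749210 get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard
	kubectl --context no-preload-749210 get deploy dashboard-metrics-scraper -n kubernetes-dashboard -o jsonpath='{.spec.template.spec.containers[*].image}'

The second command prints the deployment's container image, which the assertion expects to contain registry.k8s.io/echoserver:1.4.
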
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-749210 -n no-preload-749210
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-749210 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-749210 logs -n 25: (1.352380258s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-719541 sudo                                  | bridge-719541                | jenkins | v1.32.0 | 03 Jan 24 20:04 UTC | 03 Jan 24 20:04 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-719541 sudo                                  | bridge-719541                | jenkins | v1.32.0 | 03 Jan 24 20:04 UTC | 03 Jan 24 20:04 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-719541 sudo find                             | bridge-719541                | jenkins | v1.32.0 | 03 Jan 24 20:04 UTC | 03 Jan 24 20:04 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-719541 sudo crio                             | bridge-719541                | jenkins | v1.32.0 | 03 Jan 24 20:04 UTC | 03 Jan 24 20:04 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-719541                                       | bridge-719541                | jenkins | v1.32.0 | 03 Jan 24 20:04 UTC | 03 Jan 24 20:04 UTC |
	| delete  | -p                                                     | disable-driver-mounts-350596 | jenkins | v1.32.0 | 03 Jan 24 20:04 UTC | 03 Jan 24 20:04 UTC |
	|         | disable-driver-mounts-350596                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-018788 | jenkins | v1.32.0 | 03 Jan 24 20:04 UTC | 03 Jan 24 20:06 UTC |
	|         | default-k8s-diff-port-018788                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-927922        | old-k8s-version-927922       | jenkins | v1.32.0 | 03 Jan 24 20:05 UTC | 03 Jan 24 20:05 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-927922                              | old-k8s-version-927922       | jenkins | v1.32.0 | 03 Jan 24 20:05 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-451331            | embed-certs-451331           | jenkins | v1.32.0 | 03 Jan 24 20:05 UTC | 03 Jan 24 20:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-451331                                  | embed-certs-451331           | jenkins | v1.32.0 | 03 Jan 24 20:06 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-749210             | no-preload-749210            | jenkins | v1.32.0 | 03 Jan 24 20:06 UTC | 03 Jan 24 20:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-749210                                   | no-preload-749210            | jenkins | v1.32.0 | 03 Jan 24 20:06 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-018788  | default-k8s-diff-port-018788 | jenkins | v1.32.0 | 03 Jan 24 20:06 UTC | 03 Jan 24 20:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-018788 | jenkins | v1.32.0 | 03 Jan 24 20:06 UTC |                     |
	|         | default-k8s-diff-port-018788                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-927922             | old-k8s-version-927922       | jenkins | v1.32.0 | 03 Jan 24 20:07 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-927922                              | old-k8s-version-927922       | jenkins | v1.32.0 | 03 Jan 24 20:07 UTC | 03 Jan 24 20:14 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-451331                 | embed-certs-451331           | jenkins | v1.32.0 | 03 Jan 24 20:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-451331                                  | embed-certs-451331           | jenkins | v1.32.0 | 03 Jan 24 20:08 UTC | 03 Jan 24 20:17 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-749210                  | no-preload-749210            | jenkins | v1.32.0 | 03 Jan 24 20:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-018788       | default-k8s-diff-port-018788 | jenkins | v1.32.0 | 03 Jan 24 20:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-749210                                   | no-preload-749210            | jenkins | v1.32.0 | 03 Jan 24 20:09 UTC | 03 Jan 24 20:18 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-018788 | jenkins | v1.32.0 | 03 Jan 24 20:09 UTC | 03 Jan 24 20:18 UTC |
	|         | default-k8s-diff-port-018788                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-927922                              | old-k8s-version-927922       | jenkins | v1.32.0 | 03 Jan 24 20:32 UTC | 03 Jan 24 20:32 UTC |
	| start   | -p newest-cni-195281 --memory=2200 --alsologtostderr   | newest-cni-195281            | jenkins | v1.32.0 | 03 Jan 24 20:32 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/03 20:32:19
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0103 20:32:19.309136   67249 out.go:296] Setting OutFile to fd 1 ...
	I0103 20:32:19.309476   67249 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 20:32:19.309490   67249 out.go:309] Setting ErrFile to fd 2...
	I0103 20:32:19.309497   67249 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 20:32:19.309714   67249 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17885-9609/.minikube/bin
	I0103 20:32:19.310342   67249 out.go:303] Setting JSON to false
	I0103 20:32:19.311306   67249 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8087,"bootTime":1704305853,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0103 20:32:19.311373   67249 start.go:138] virtualization: kvm guest
	I0103 20:32:19.314262   67249 out.go:177] * [newest-cni-195281] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0103 20:32:19.316078   67249 out.go:177]   - MINIKUBE_LOCATION=17885
	I0103 20:32:19.316020   67249 notify.go:220] Checking for updates...
	I0103 20:32:19.318020   67249 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0103 20:32:19.319745   67249 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17885-9609/kubeconfig
	I0103 20:32:19.321476   67249 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17885-9609/.minikube
	I0103 20:32:19.323306   67249 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0103 20:32:19.325247   67249 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0103 20:32:19.327385   67249 config.go:182] Loaded profile config "default-k8s-diff-port-018788": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 20:32:19.327493   67249 config.go:182] Loaded profile config "embed-certs-451331": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 20:32:19.327621   67249 config.go:182] Loaded profile config "no-preload-749210": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0103 20:32:19.327723   67249 driver.go:392] Setting default libvirt URI to qemu:///system
	I0103 20:32:19.368449   67249 out.go:177] * Using the kvm2 driver based on user configuration
	I0103 20:32:19.369981   67249 start.go:298] selected driver: kvm2
	I0103 20:32:19.369999   67249 start.go:902] validating driver "kvm2" against <nil>
	I0103 20:32:19.370010   67249 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0103 20:32:19.370814   67249 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:32:19.370900   67249 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17885-9609/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0103 20:32:19.386697   67249 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0103 20:32:19.386765   67249 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	W0103 20:32:19.386794   67249 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0103 20:32:19.387069   67249 start_flags.go:950] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0103 20:32:19.387130   67249 cni.go:84] Creating CNI manager for ""
	I0103 20:32:19.387146   67249 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0103 20:32:19.387180   67249 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0103 20:32:19.387187   67249 start_flags.go:323] config:
	{Name:newest-cni-195281 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-195281 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 20:32:19.387359   67249 iso.go:125] acquiring lock: {Name:mk59d09085a9554144b68de9b7bfe0e0fce53cc5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:32:19.390156   67249 out.go:177] * Starting control plane node newest-cni-195281 in cluster newest-cni-195281
	I0103 20:32:19.391874   67249 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0103 20:32:19.391934   67249 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0103 20:32:19.391952   67249 cache.go:56] Caching tarball of preloaded images
	I0103 20:32:19.392059   67249 preload.go:174] Found /home/jenkins/minikube-integration/17885-9609/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0103 20:32:19.392071   67249 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on crio
	I0103 20:32:19.392191   67249 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/newest-cni-195281/config.json ...
	I0103 20:32:19.392208   67249 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/newest-cni-195281/config.json: {Name:mk604433cce431aecc704e6ae9cbe8e69956f33d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:32:19.392355   67249 start.go:365] acquiring machines lock for newest-cni-195281: {Name:mk43df5d7e9fef8aa5f3e5c539ca15bff35ae8cf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0103 20:32:19.392390   67249 start.go:369] acquired machines lock for "newest-cni-195281" in 22.434µs
	I0103 20:32:19.392407   67249 start.go:93] Provisioning new machine with config: &{Name:newest-cni-195281 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-195281 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0103 20:32:19.392486   67249 start.go:125] createHost starting for "" (driver="kvm2")
	I0103 20:32:19.394467   67249 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0103 20:32:19.394687   67249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:32:19.394745   67249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:32:19.410171   67249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37657
	I0103 20:32:19.410720   67249 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:32:19.411315   67249 main.go:141] libmachine: Using API Version  1
	I0103 20:32:19.411339   67249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:32:19.411722   67249 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:32:19.411889   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetMachineName
	I0103 20:32:19.412083   67249 main.go:141] libmachine: (newest-cni-195281) Calling .DriverName
	I0103 20:32:19.412262   67249 start.go:159] libmachine.API.Create for "newest-cni-195281" (driver="kvm2")
	I0103 20:32:19.412296   67249 client.go:168] LocalClient.Create starting
	I0103 20:32:19.412334   67249 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem
	I0103 20:32:19.412371   67249 main.go:141] libmachine: Decoding PEM data...
	I0103 20:32:19.412386   67249 main.go:141] libmachine: Parsing certificate...
	I0103 20:32:19.412440   67249 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem
	I0103 20:32:19.412472   67249 main.go:141] libmachine: Decoding PEM data...
	I0103 20:32:19.412486   67249 main.go:141] libmachine: Parsing certificate...
	I0103 20:32:19.412501   67249 main.go:141] libmachine: Running pre-create checks...
	I0103 20:32:19.412510   67249 main.go:141] libmachine: (newest-cni-195281) Calling .PreCreateCheck
	I0103 20:32:19.412860   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetConfigRaw
	I0103 20:32:19.413237   67249 main.go:141] libmachine: Creating machine...
	I0103 20:32:19.413252   67249 main.go:141] libmachine: (newest-cni-195281) Calling .Create
	I0103 20:32:19.413368   67249 main.go:141] libmachine: (newest-cni-195281) Creating KVM machine...
	I0103 20:32:19.414780   67249 main.go:141] libmachine: (newest-cni-195281) DBG | found existing default KVM network
	I0103 20:32:19.416065   67249 main.go:141] libmachine: (newest-cni-195281) DBG | I0103 20:32:19.415922   67271 network.go:214] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:6a:55:bb} reservation:<nil>}
	I0103 20:32:19.417061   67249 main.go:141] libmachine: (newest-cni-195281) DBG | I0103 20:32:19.416867   67271 network.go:214] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:e5:bd:db} reservation:<nil>}
	I0103 20:32:19.417786   67249 main.go:141] libmachine: (newest-cni-195281) DBG | I0103 20:32:19.417674   67271 network.go:214] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:ae:17:ed} reservation:<nil>}
	I0103 20:32:19.418963   67249 main.go:141] libmachine: (newest-cni-195281) DBG | I0103 20:32:19.418888   67271 network.go:209] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00027f800}
	I0103 20:32:19.425096   67249 main.go:141] libmachine: (newest-cni-195281) DBG | trying to create private KVM network mk-newest-cni-195281 192.168.72.0/24...
	I0103 20:32:19.509409   67249 main.go:141] libmachine: (newest-cni-195281) Setting up store path in /home/jenkins/minikube-integration/17885-9609/.minikube/machines/newest-cni-195281 ...
	I0103 20:32:19.509454   67249 main.go:141] libmachine: (newest-cni-195281) DBG | private KVM network mk-newest-cni-195281 192.168.72.0/24 created
	I0103 20:32:19.509473   67249 main.go:141] libmachine: (newest-cni-195281) Building disk image from file:///home/jenkins/minikube-integration/17885-9609/.minikube/cache/iso/amd64/minikube-v1.32.1-1702708929-17806-amd64.iso
	I0103 20:32:19.509514   67249 main.go:141] libmachine: (newest-cni-195281) Downloading /home/jenkins/minikube-integration/17885-9609/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17885-9609/.minikube/cache/iso/amd64/minikube-v1.32.1-1702708929-17806-amd64.iso...
	I0103 20:32:19.509675   67249 main.go:141] libmachine: (newest-cni-195281) DBG | I0103 20:32:19.509290   67271 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17885-9609/.minikube
	I0103 20:32:19.721072   67249 main.go:141] libmachine: (newest-cni-195281) DBG | I0103 20:32:19.720924   67271 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/newest-cni-195281/id_rsa...
	I0103 20:32:19.797041   67249 main.go:141] libmachine: (newest-cni-195281) DBG | I0103 20:32:19.796916   67271 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/newest-cni-195281/newest-cni-195281.rawdisk...
	I0103 20:32:19.797066   67249 main.go:141] libmachine: (newest-cni-195281) DBG | Writing magic tar header
	I0103 20:32:19.797080   67249 main.go:141] libmachine: (newest-cni-195281) DBG | Writing SSH key tar header
	I0103 20:32:19.797089   67249 main.go:141] libmachine: (newest-cni-195281) DBG | I0103 20:32:19.797050   67271 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17885-9609/.minikube/machines/newest-cni-195281 ...
	I0103 20:32:19.797185   67249 main.go:141] libmachine: (newest-cni-195281) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/newest-cni-195281
	I0103 20:32:19.797212   67249 main.go:141] libmachine: (newest-cni-195281) Setting executable bit set on /home/jenkins/minikube-integration/17885-9609/.minikube/machines/newest-cni-195281 (perms=drwx------)
	I0103 20:32:19.797223   67249 main.go:141] libmachine: (newest-cni-195281) Setting executable bit set on /home/jenkins/minikube-integration/17885-9609/.minikube/machines (perms=drwxr-xr-x)
	I0103 20:32:19.797237   67249 main.go:141] libmachine: (newest-cni-195281) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17885-9609/.minikube/machines
	I0103 20:32:19.797270   67249 main.go:141] libmachine: (newest-cni-195281) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17885-9609/.minikube
	I0103 20:32:19.797283   67249 main.go:141] libmachine: (newest-cni-195281) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17885-9609
	I0103 20:32:19.797291   67249 main.go:141] libmachine: (newest-cni-195281) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0103 20:32:19.797298   67249 main.go:141] libmachine: (newest-cni-195281) DBG | Checking permissions on dir: /home/jenkins
	I0103 20:32:19.797330   67249 main.go:141] libmachine: (newest-cni-195281) Setting executable bit set on /home/jenkins/minikube-integration/17885-9609/.minikube (perms=drwxr-xr-x)
	I0103 20:32:19.797359   67249 main.go:141] libmachine: (newest-cni-195281) DBG | Checking permissions on dir: /home
	I0103 20:32:19.797376   67249 main.go:141] libmachine: (newest-cni-195281) Setting executable bit set on /home/jenkins/minikube-integration/17885-9609 (perms=drwxrwxr-x)
	I0103 20:32:19.797390   67249 main.go:141] libmachine: (newest-cni-195281) DBG | Skipping /home - not owner
	I0103 20:32:19.797420   67249 main.go:141] libmachine: (newest-cni-195281) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0103 20:32:19.797443   67249 main.go:141] libmachine: (newest-cni-195281) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0103 20:32:19.797465   67249 main.go:141] libmachine: (newest-cni-195281) Creating domain...
	I0103 20:32:19.798661   67249 main.go:141] libmachine: (newest-cni-195281) define libvirt domain using xml: 
	I0103 20:32:19.798699   67249 main.go:141] libmachine: (newest-cni-195281) <domain type='kvm'>
	I0103 20:32:19.798733   67249 main.go:141] libmachine: (newest-cni-195281)   <name>newest-cni-195281</name>
	I0103 20:32:19.798765   67249 main.go:141] libmachine: (newest-cni-195281)   <memory unit='MiB'>2200</memory>
	I0103 20:32:19.798780   67249 main.go:141] libmachine: (newest-cni-195281)   <vcpu>2</vcpu>
	I0103 20:32:19.798790   67249 main.go:141] libmachine: (newest-cni-195281)   <features>
	I0103 20:32:19.798802   67249 main.go:141] libmachine: (newest-cni-195281)     <acpi/>
	I0103 20:32:19.798814   67249 main.go:141] libmachine: (newest-cni-195281)     <apic/>
	I0103 20:32:19.798826   67249 main.go:141] libmachine: (newest-cni-195281)     <pae/>
	I0103 20:32:19.798836   67249 main.go:141] libmachine: (newest-cni-195281)     
	I0103 20:32:19.798862   67249 main.go:141] libmachine: (newest-cni-195281)   </features>
	I0103 20:32:19.798981   67249 main.go:141] libmachine: (newest-cni-195281)   <cpu mode='host-passthrough'>
	I0103 20:32:19.799017   67249 main.go:141] libmachine: (newest-cni-195281)   
	I0103 20:32:19.799041   67249 main.go:141] libmachine: (newest-cni-195281)   </cpu>
	I0103 20:32:19.799055   67249 main.go:141] libmachine: (newest-cni-195281)   <os>
	I0103 20:32:19.799068   67249 main.go:141] libmachine: (newest-cni-195281)     <type>hvm</type>
	I0103 20:32:19.799083   67249 main.go:141] libmachine: (newest-cni-195281)     <boot dev='cdrom'/>
	I0103 20:32:19.799096   67249 main.go:141] libmachine: (newest-cni-195281)     <boot dev='hd'/>
	I0103 20:32:19.799111   67249 main.go:141] libmachine: (newest-cni-195281)     <bootmenu enable='no'/>
	I0103 20:32:19.799123   67249 main.go:141] libmachine: (newest-cni-195281)   </os>
	I0103 20:32:19.799136   67249 main.go:141] libmachine: (newest-cni-195281)   <devices>
	I0103 20:32:19.799152   67249 main.go:141] libmachine: (newest-cni-195281)     <disk type='file' device='cdrom'>
	I0103 20:32:19.799170   67249 main.go:141] libmachine: (newest-cni-195281)       <source file='/home/jenkins/minikube-integration/17885-9609/.minikube/machines/newest-cni-195281/boot2docker.iso'/>
	I0103 20:32:19.799186   67249 main.go:141] libmachine: (newest-cni-195281)       <target dev='hdc' bus='scsi'/>
	I0103 20:32:19.799199   67249 main.go:141] libmachine: (newest-cni-195281)       <readonly/>
	I0103 20:32:19.799223   67249 main.go:141] libmachine: (newest-cni-195281)     </disk>
	I0103 20:32:19.799240   67249 main.go:141] libmachine: (newest-cni-195281)     <disk type='file' device='disk'>
	I0103 20:32:19.799264   67249 main.go:141] libmachine: (newest-cni-195281)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0103 20:32:19.799305   67249 main.go:141] libmachine: (newest-cni-195281)       <source file='/home/jenkins/minikube-integration/17885-9609/.minikube/machines/newest-cni-195281/newest-cni-195281.rawdisk'/>
	I0103 20:32:19.799322   67249 main.go:141] libmachine: (newest-cni-195281)       <target dev='hda' bus='virtio'/>
	I0103 20:32:19.799333   67249 main.go:141] libmachine: (newest-cni-195281)     </disk>
	I0103 20:32:19.799344   67249 main.go:141] libmachine: (newest-cni-195281)     <interface type='network'>
	I0103 20:32:19.799357   67249 main.go:141] libmachine: (newest-cni-195281)       <source network='mk-newest-cni-195281'/>
	I0103 20:32:19.799371   67249 main.go:141] libmachine: (newest-cni-195281)       <model type='virtio'/>
	I0103 20:32:19.799383   67249 main.go:141] libmachine: (newest-cni-195281)     </interface>
	I0103 20:32:19.799397   67249 main.go:141] libmachine: (newest-cni-195281)     <interface type='network'>
	I0103 20:32:19.799409   67249 main.go:141] libmachine: (newest-cni-195281)       <source network='default'/>
	I0103 20:32:19.799423   67249 main.go:141] libmachine: (newest-cni-195281)       <model type='virtio'/>
	I0103 20:32:19.799436   67249 main.go:141] libmachine: (newest-cni-195281)     </interface>
	I0103 20:32:19.799451   67249 main.go:141] libmachine: (newest-cni-195281)     <serial type='pty'>
	I0103 20:32:19.799463   67249 main.go:141] libmachine: (newest-cni-195281)       <target port='0'/>
	I0103 20:32:19.799483   67249 main.go:141] libmachine: (newest-cni-195281)     </serial>
	I0103 20:32:19.799496   67249 main.go:141] libmachine: (newest-cni-195281)     <console type='pty'>
	I0103 20:32:19.799515   67249 main.go:141] libmachine: (newest-cni-195281)       <target type='serial' port='0'/>
	I0103 20:32:19.799534   67249 main.go:141] libmachine: (newest-cni-195281)     </console>
	I0103 20:32:19.799552   67249 main.go:141] libmachine: (newest-cni-195281)     <rng model='virtio'>
	I0103 20:32:19.799565   67249 main.go:141] libmachine: (newest-cni-195281)       <backend model='random'>/dev/random</backend>
	I0103 20:32:19.799580   67249 main.go:141] libmachine: (newest-cni-195281)     </rng>
	I0103 20:32:19.799592   67249 main.go:141] libmachine: (newest-cni-195281)     
	I0103 20:32:19.799605   67249 main.go:141] libmachine: (newest-cni-195281)     
	I0103 20:32:19.799614   67249 main.go:141] libmachine: (newest-cni-195281)   </devices>
	I0103 20:32:19.799626   67249 main.go:141] libmachine: (newest-cni-195281) </domain>
	I0103 20:32:19.799640   67249 main.go:141] libmachine: (newest-cni-195281) 
	I0103 20:32:19.803863   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:21:41:b4 in network default
	I0103 20:32:19.804577   67249 main.go:141] libmachine: (newest-cni-195281) Ensuring networks are active...
	I0103 20:32:19.804622   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:19.805388   67249 main.go:141] libmachine: (newest-cni-195281) Ensuring network default is active
	I0103 20:32:19.805848   67249 main.go:141] libmachine: (newest-cni-195281) Ensuring network mk-newest-cni-195281 is active
	I0103 20:32:19.806341   67249 main.go:141] libmachine: (newest-cni-195281) Getting domain xml...
	I0103 20:32:19.807082   67249 main.go:141] libmachine: (newest-cni-195281) Creating domain...
	I0103 20:32:21.132770   67249 main.go:141] libmachine: (newest-cni-195281) Waiting to get IP...
	I0103 20:32:21.134841   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:21.135341   67249 main.go:141] libmachine: (newest-cni-195281) DBG | unable to find current IP address of domain newest-cni-195281 in network mk-newest-cni-195281
	I0103 20:32:21.135366   67249 main.go:141] libmachine: (newest-cni-195281) DBG | I0103 20:32:21.135310   67271 retry.go:31] will retry after 211.135104ms: waiting for machine to come up
	I0103 20:32:21.347666   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:21.348235   67249 main.go:141] libmachine: (newest-cni-195281) DBG | unable to find current IP address of domain newest-cni-195281 in network mk-newest-cni-195281
	I0103 20:32:21.348261   67249 main.go:141] libmachine: (newest-cni-195281) DBG | I0103 20:32:21.348145   67271 retry.go:31] will retry after 323.28225ms: waiting for machine to come up
	I0103 20:32:21.672767   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:21.673311   67249 main.go:141] libmachine: (newest-cni-195281) DBG | unable to find current IP address of domain newest-cni-195281 in network mk-newest-cni-195281
	I0103 20:32:21.673343   67249 main.go:141] libmachine: (newest-cni-195281) DBG | I0103 20:32:21.673263   67271 retry.go:31] will retry after 371.328166ms: waiting for machine to come up
	I0103 20:32:22.045877   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:22.046594   67249 main.go:141] libmachine: (newest-cni-195281) DBG | unable to find current IP address of domain newest-cni-195281 in network mk-newest-cni-195281
	I0103 20:32:22.046630   67249 main.go:141] libmachine: (newest-cni-195281) DBG | I0103 20:32:22.046495   67271 retry.go:31] will retry after 424.478536ms: waiting for machine to come up
	I0103 20:32:22.472185   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:22.472629   67249 main.go:141] libmachine: (newest-cni-195281) DBG | unable to find current IP address of domain newest-cni-195281 in network mk-newest-cni-195281
	I0103 20:32:22.472661   67249 main.go:141] libmachine: (newest-cni-195281) DBG | I0103 20:32:22.472550   67271 retry.go:31] will retry after 661.63112ms: waiting for machine to come up
	I0103 20:32:23.135501   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:23.135980   67249 main.go:141] libmachine: (newest-cni-195281) DBG | unable to find current IP address of domain newest-cni-195281 in network mk-newest-cni-195281
	I0103 20:32:23.136011   67249 main.go:141] libmachine: (newest-cni-195281) DBG | I0103 20:32:23.135936   67271 retry.go:31] will retry after 627.099478ms: waiting for machine to come up
	I0103 20:32:23.764511   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:23.764964   67249 main.go:141] libmachine: (newest-cni-195281) DBG | unable to find current IP address of domain newest-cni-195281 in network mk-newest-cni-195281
	I0103 20:32:23.764993   67249 main.go:141] libmachine: (newest-cni-195281) DBG | I0103 20:32:23.764917   67271 retry.go:31] will retry after 1.023643059s: waiting for machine to come up
	I0103 20:32:24.790457   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:24.791000   67249 main.go:141] libmachine: (newest-cni-195281) DBG | unable to find current IP address of domain newest-cni-195281 in network mk-newest-cni-195281
	I0103 20:32:24.791033   67249 main.go:141] libmachine: (newest-cni-195281) DBG | I0103 20:32:24.790947   67271 retry.go:31] will retry after 1.372445622s: waiting for machine to come up
	I0103 20:32:26.165309   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:26.165782   67249 main.go:141] libmachine: (newest-cni-195281) DBG | unable to find current IP address of domain newest-cni-195281 in network mk-newest-cni-195281
	I0103 20:32:26.165801   67249 main.go:141] libmachine: (newest-cni-195281) DBG | I0103 20:32:26.165734   67271 retry.go:31] will retry after 1.684754533s: waiting for machine to come up
	I0103 20:32:27.851684   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:27.852122   67249 main.go:141] libmachine: (newest-cni-195281) DBG | unable to find current IP address of domain newest-cni-195281 in network mk-newest-cni-195281
	I0103 20:32:27.852160   67249 main.go:141] libmachine: (newest-cni-195281) DBG | I0103 20:32:27.852062   67271 retry.go:31] will retry after 1.693836467s: waiting for machine to come up
	I0103 20:32:29.547539   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:29.548051   67249 main.go:141] libmachine: (newest-cni-195281) DBG | unable to find current IP address of domain newest-cni-195281 in network mk-newest-cni-195281
	I0103 20:32:29.548080   67249 main.go:141] libmachine: (newest-cni-195281) DBG | I0103 20:32:29.548006   67271 retry.go:31] will retry after 2.126952355s: waiting for machine to come up
	I0103 20:32:31.676576   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:31.677064   67249 main.go:141] libmachine: (newest-cni-195281) DBG | unable to find current IP address of domain newest-cni-195281 in network mk-newest-cni-195281
	I0103 20:32:31.677093   67249 main.go:141] libmachine: (newest-cni-195281) DBG | I0103 20:32:31.677027   67271 retry.go:31] will retry after 3.435892014s: waiting for machine to come up
	I0103 20:32:35.114880   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:35.115371   67249 main.go:141] libmachine: (newest-cni-195281) DBG | unable to find current IP address of domain newest-cni-195281 in network mk-newest-cni-195281
	I0103 20:32:35.115397   67249 main.go:141] libmachine: (newest-cni-195281) DBG | I0103 20:32:35.115298   67271 retry.go:31] will retry after 3.914788696s: waiting for machine to come up
	I0103 20:32:39.034444   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:39.034917   67249 main.go:141] libmachine: (newest-cni-195281) DBG | unable to find current IP address of domain newest-cni-195281 in network mk-newest-cni-195281
	I0103 20:32:39.034950   67249 main.go:141] libmachine: (newest-cni-195281) DBG | I0103 20:32:39.034872   67271 retry.go:31] will retry after 5.092646295s: waiting for machine to come up
	I0103 20:32:44.131872   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:44.132395   67249 main.go:141] libmachine: (newest-cni-195281) Found IP for machine: 192.168.72.219
	I0103 20:32:44.132428   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has current primary IP address 192.168.72.219 and MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:44.132441   67249 main.go:141] libmachine: (newest-cni-195281) Reserving static IP address...
	I0103 20:32:44.132922   67249 main.go:141] libmachine: (newest-cni-195281) DBG | unable to find host DHCP lease matching {name: "newest-cni-195281", mac: "52:54:00:5a:49:af", ip: "192.168.72.219"} in network mk-newest-cni-195281
	I0103 20:32:44.216469   67249 main.go:141] libmachine: (newest-cni-195281) Reserved static IP address: 192.168.72.219
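
The repeated "will retry after ..." lines are a jittered backoff loop that polls libvirt for a DHCP lease on the domain's MAC address until an IP appears. A stripped-down Go sketch of that pattern, not minikube's retry package; lookupLeaseIP is a hypothetical stand-in for the real lease query, and the intervals only approximate the logged ones:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupLeaseIP is a placeholder for asking libvirt which lease matches the
// domain's MAC address (52:54:00:5a:49:af in this run).
func lookupLeaseIP(mac string) (string, error) {
	return "", errors.New("no lease yet")
}

func waitForIP(mac string, deadline time.Duration) (string, error) {
	start := time.Now()
	delay := 200 * time.Millisecond
	for time.Since(start) < deadline {
		if ip, err := lookupLeaseIP(mac); err == nil {
			return ip, nil
		}
		// Grow the delay and add jitter, roughly like the 211ms, 323ms, 371ms,
		// ... intervals seen in the log.
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if delay < 5*time.Second {
			delay = delay * 3 / 2
		}
	}
	return "", fmt.Errorf("no IP for %s within %v", mac, deadline)
}

func main() {
	if ip, err := waitForIP("52:54:00:5a:49:af", 30*time.Second); err == nil {
		fmt.Println("Found IP for machine:", ip)
	}
}
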
	I0103 20:32:44.216511   67249 main.go:141] libmachine: (newest-cni-195281) Waiting for SSH to be available...
	I0103 20:32:44.216522   67249 main.go:141] libmachine: (newest-cni-195281) DBG | Getting to WaitForSSH function...
	I0103 20:32:44.219743   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:44.220136   67249 main.go:141] libmachine: (newest-cni-195281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:49:af", ip: ""} in network mk-newest-cni-195281: {Iface:virbr3 ExpiryTime:2024-01-03 21:32:34 +0000 UTC Type:0 Mac:52:54:00:5a:49:af Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:minikube Clientid:01:52:54:00:5a:49:af}
	I0103 20:32:44.220181   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined IP address 192.168.72.219 and MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:44.220352   67249 main.go:141] libmachine: (newest-cni-195281) DBG | Using SSH client type: external
	I0103 20:32:44.220382   67249 main.go:141] libmachine: (newest-cni-195281) DBG | Using SSH private key: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/newest-cni-195281/id_rsa (-rw-------)
	I0103 20:32:44.220427   67249 main.go:141] libmachine: (newest-cni-195281) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.219 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17885-9609/.minikube/machines/newest-cni-195281/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0103 20:32:44.220443   67249 main.go:141] libmachine: (newest-cni-195281) DBG | About to run SSH command:
	I0103 20:32:44.220472   67249 main.go:141] libmachine: (newest-cni-195281) DBG | exit 0
	I0103 20:32:44.358552   67249 main.go:141] libmachine: (newest-cni-195281) DBG | SSH cmd err, output: <nil>: 
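
"Waiting for SSH to be available" boils down to running "exit 0" over SSH until it returns status 0, using the external ssh client and the options listed a few lines above. A small Go sketch of that probe (host, user, and key path are the ones from this run; only a subset of the logged ssh options is shown, and the retry count is illustrative):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady returns true once "exit 0" succeeds over SSH, meaning sshd is up
// and the injected key is accepted.
func sshReady(host, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-F", "/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@"+host,
		"exit 0")
	return cmd.Run() == nil
}

func main() {
	host := "192.168.72.219"
	key := "/home/jenkins/minikube-integration/17885-9609/.minikube/machines/newest-cni-195281/id_rsa"
	for i := 0; i < 30; i++ {
		if sshReady(host, key) {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("gave up waiting for SSH")
}
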
	I0103 20:32:44.358866   67249 main.go:141] libmachine: (newest-cni-195281) KVM machine creation complete!
	I0103 20:32:44.359216   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetConfigRaw
	I0103 20:32:44.359752   67249 main.go:141] libmachine: (newest-cni-195281) Calling .DriverName
	I0103 20:32:44.359969   67249 main.go:141] libmachine: (newest-cni-195281) Calling .DriverName
	I0103 20:32:44.360227   67249 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0103 20:32:44.360257   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetState
	I0103 20:32:44.361613   67249 main.go:141] libmachine: Detecting operating system of created instance...
	I0103 20:32:44.361632   67249 main.go:141] libmachine: Waiting for SSH to be available...
	I0103 20:32:44.361641   67249 main.go:141] libmachine: Getting to WaitForSSH function...
	I0103 20:32:44.361656   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHHostname
	I0103 20:32:44.364691   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:44.365073   67249 main.go:141] libmachine: (newest-cni-195281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:49:af", ip: ""} in network mk-newest-cni-195281: {Iface:virbr3 ExpiryTime:2024-01-03 21:32:34 +0000 UTC Type:0 Mac:52:54:00:5a:49:af Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:newest-cni-195281 Clientid:01:52:54:00:5a:49:af}
	I0103 20:32:44.365109   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined IP address 192.168.72.219 and MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:44.365248   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHPort
	I0103 20:32:44.365445   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHKeyPath
	I0103 20:32:44.365680   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHKeyPath
	I0103 20:32:44.365808   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHUsername
	I0103 20:32:44.365973   67249 main.go:141] libmachine: Using SSH client type: native
	I0103 20:32:44.366604   67249 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.72.219 22 <nil> <nil>}
	I0103 20:32:44.366626   67249 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0103 20:32:44.493837   67249 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0103 20:32:44.493867   67249 main.go:141] libmachine: Detecting the provisioner...
	I0103 20:32:44.493880   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHHostname
	I0103 20:32:44.497161   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:44.497541   67249 main.go:141] libmachine: (newest-cni-195281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:49:af", ip: ""} in network mk-newest-cni-195281: {Iface:virbr3 ExpiryTime:2024-01-03 21:32:34 +0000 UTC Type:0 Mac:52:54:00:5a:49:af Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:newest-cni-195281 Clientid:01:52:54:00:5a:49:af}
	I0103 20:32:44.497601   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined IP address 192.168.72.219 and MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:44.497794   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHPort
	I0103 20:32:44.498003   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHKeyPath
	I0103 20:32:44.498199   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHKeyPath
	I0103 20:32:44.498363   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHUsername
	I0103 20:32:44.498575   67249 main.go:141] libmachine: Using SSH client type: native
	I0103 20:32:44.499018   67249 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.72.219 22 <nil> <nil>}
	I0103 20:32:44.499033   67249 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0103 20:32:44.623686   67249 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gae27a7b-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0103 20:32:44.623771   67249 main.go:141] libmachine: found compatible host: buildroot
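
The provisioner is chosen by reading /etc/os-release on the guest and matching its ID field, which here resolves to buildroot. A minimal Go sketch of just the parsing (the SSH transport is omitted; the sample input is the output captured above):

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// osReleaseID extracts the ID= field from /etc/os-release contents.
func osReleaseID(osRelease string) string {
	sc := bufio.NewScanner(strings.NewReader(osRelease))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if strings.HasPrefix(line, "ID=") {
			return strings.Trim(strings.TrimPrefix(line, "ID="), `"`)
		}
	}
	return ""
}

func main() {
	out := "NAME=Buildroot\nID=buildroot\nVERSION_ID=2021.02.12\nPRETTY_NAME=\"Buildroot 2021.02.12\"\n"
	fmt.Println("found compatible host:", osReleaseID(out)) // buildroot
}
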
	I0103 20:32:44.623788   67249 main.go:141] libmachine: Provisioning with buildroot...
	I0103 20:32:44.623798   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetMachineName
	I0103 20:32:44.624047   67249 buildroot.go:166] provisioning hostname "newest-cni-195281"
	I0103 20:32:44.624075   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetMachineName
	I0103 20:32:44.624251   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHHostname
	I0103 20:32:44.627016   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:44.627435   67249 main.go:141] libmachine: (newest-cni-195281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:49:af", ip: ""} in network mk-newest-cni-195281: {Iface:virbr3 ExpiryTime:2024-01-03 21:32:34 +0000 UTC Type:0 Mac:52:54:00:5a:49:af Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:newest-cni-195281 Clientid:01:52:54:00:5a:49:af}
	I0103 20:32:44.627469   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined IP address 192.168.72.219 and MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:44.627629   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHPort
	I0103 20:32:44.627818   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHKeyPath
	I0103 20:32:44.627970   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHKeyPath
	I0103 20:32:44.628153   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHUsername
	I0103 20:32:44.628308   67249 main.go:141] libmachine: Using SSH client type: native
	I0103 20:32:44.628628   67249 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.72.219 22 <nil> <nil>}
	I0103 20:32:44.628643   67249 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-195281 && echo "newest-cni-195281" | sudo tee /etc/hostname
	I0103 20:32:44.766387   67249 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-195281
	
	I0103 20:32:44.766419   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHHostname
	I0103 20:32:44.769605   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:44.770020   67249 main.go:141] libmachine: (newest-cni-195281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:49:af", ip: ""} in network mk-newest-cni-195281: {Iface:virbr3 ExpiryTime:2024-01-03 21:32:34 +0000 UTC Type:0 Mac:52:54:00:5a:49:af Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:newest-cni-195281 Clientid:01:52:54:00:5a:49:af}
	I0103 20:32:44.770063   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined IP address 192.168.72.219 and MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:44.770286   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHPort
	I0103 20:32:44.770478   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHKeyPath
	I0103 20:32:44.770696   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHKeyPath
	I0103 20:32:44.770855   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHUsername
	I0103 20:32:44.771047   67249 main.go:141] libmachine: Using SSH client type: native
	I0103 20:32:44.771391   67249 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.72.219 22 <nil> <nil>}
	I0103 20:32:44.771416   67249 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-195281' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-195281/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-195281' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0103 20:32:44.906281   67249 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0103 20:32:44.906308   67249 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17885-9609/.minikube CaCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17885-9609/.minikube}
	I0103 20:32:44.906343   67249 buildroot.go:174] setting up certificates
	I0103 20:32:44.906354   67249 provision.go:83] configureAuth start
	I0103 20:32:44.906370   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetMachineName
	I0103 20:32:44.906662   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetIP
	I0103 20:32:44.909425   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:44.909736   67249 main.go:141] libmachine: (newest-cni-195281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:49:af", ip: ""} in network mk-newest-cni-195281: {Iface:virbr3 ExpiryTime:2024-01-03 21:32:34 +0000 UTC Type:0 Mac:52:54:00:5a:49:af Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:newest-cni-195281 Clientid:01:52:54:00:5a:49:af}
	I0103 20:32:44.909763   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined IP address 192.168.72.219 and MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:44.909936   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHHostname
	I0103 20:32:44.912539   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:44.913023   67249 main.go:141] libmachine: (newest-cni-195281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:49:af", ip: ""} in network mk-newest-cni-195281: {Iface:virbr3 ExpiryTime:2024-01-03 21:32:34 +0000 UTC Type:0 Mac:52:54:00:5a:49:af Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:newest-cni-195281 Clientid:01:52:54:00:5a:49:af}
	I0103 20:32:44.913051   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined IP address 192.168.72.219 and MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:44.913266   67249 provision.go:138] copyHostCerts
	I0103 20:32:44.913339   67249 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem, removing ...
	I0103 20:32:44.913361   67249 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem
	I0103 20:32:44.913448   67249 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem (1078 bytes)
	I0103 20:32:44.913580   67249 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem, removing ...
	I0103 20:32:44.913592   67249 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem
	I0103 20:32:44.913631   67249 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem (1123 bytes)
	I0103 20:32:44.913722   67249 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem, removing ...
	I0103 20:32:44.913732   67249 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem
	I0103 20:32:44.913769   67249 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem (1679 bytes)
	I0103 20:32:44.913851   67249 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem org=jenkins.newest-cni-195281 san=[192.168.72.219 192.168.72.219 localhost 127.0.0.1 minikube newest-cni-195281]
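
The server certificate above is signed by the existing minikube CA and carries both DNS and IP SANs. A compact Go sketch of that kind of issuance with crypto/x509 (paths are shortened, the CA key is assumed to be PKCS#1 PEM, and the serial/validity choices are illustrative rather than minikube's):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

// mustReadPEM loads and decodes the first PEM block in a file.
func mustReadPEM(path string) *pem.Block {
	raw, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		panic("no PEM data in " + path)
	}
	return block
}

func main() {
	caCert, err := x509.ParseCertificate(mustReadPEM("certs/ca.pem").Bytes)
	if err != nil {
		panic(err)
	}
	caKey, err := x509.ParsePKCS1PrivateKey(mustReadPEM("certs/ca-key.pem").Bytes) // assumes PKCS#1
	if err != nil {
		panic(err)
	}

	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-195281"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(10, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs copied from the log line above.
		DNSNames:    []string{"localhost", "minikube", "newest-cni-195281"},
		IPAddresses: []net.IP{net.ParseIP("192.168.72.219"), net.ParseIP("127.0.0.1")},
	}

	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
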
	I0103 20:32:45.098688   67249 provision.go:172] copyRemoteCerts
	I0103 20:32:45.098762   67249 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0103 20:32:45.098793   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHHostname
	I0103 20:32:45.101827   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:45.102181   67249 main.go:141] libmachine: (newest-cni-195281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:49:af", ip: ""} in network mk-newest-cni-195281: {Iface:virbr3 ExpiryTime:2024-01-03 21:32:34 +0000 UTC Type:0 Mac:52:54:00:5a:49:af Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:newest-cni-195281 Clientid:01:52:54:00:5a:49:af}
	I0103 20:32:45.102213   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined IP address 192.168.72.219 and MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:45.102468   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHPort
	I0103 20:32:45.102706   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHKeyPath
	I0103 20:32:45.102868   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHUsername
	I0103 20:32:45.103005   67249 sshutil.go:53] new ssh client: &{IP:192.168.72.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/newest-cni-195281/id_rsa Username:docker}
	I0103 20:32:45.197407   67249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0103 20:32:45.221474   67249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0103 20:32:45.244138   67249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0103 20:32:45.268222   67249 provision.go:86] duration metric: configureAuth took 361.849849ms
	I0103 20:32:45.268253   67249 buildroot.go:189] setting minikube options for container-runtime
	I0103 20:32:45.268431   67249 config.go:182] Loaded profile config "newest-cni-195281": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0103 20:32:45.268531   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHHostname
	I0103 20:32:45.271603   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:45.272110   67249 main.go:141] libmachine: (newest-cni-195281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:49:af", ip: ""} in network mk-newest-cni-195281: {Iface:virbr3 ExpiryTime:2024-01-03 21:32:34 +0000 UTC Type:0 Mac:52:54:00:5a:49:af Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:newest-cni-195281 Clientid:01:52:54:00:5a:49:af}
	I0103 20:32:45.272146   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined IP address 192.168.72.219 and MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:45.272402   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHPort
	I0103 20:32:45.272676   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHKeyPath
	I0103 20:32:45.272851   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHKeyPath
	I0103 20:32:45.273015   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHUsername
	I0103 20:32:45.273229   67249 main.go:141] libmachine: Using SSH client type: native
	I0103 20:32:45.273571   67249 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.72.219 22 <nil> <nil>}
	I0103 20:32:45.273593   67249 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0103 20:32:45.615676   67249 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0103 20:32:45.615712   67249 main.go:141] libmachine: Checking connection to Docker...
	I0103 20:32:45.615725   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetURL
	I0103 20:32:45.617050   67249 main.go:141] libmachine: (newest-cni-195281) DBG | Using libvirt version 6000000
	I0103 20:32:45.619845   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:45.620254   67249 main.go:141] libmachine: (newest-cni-195281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:49:af", ip: ""} in network mk-newest-cni-195281: {Iface:virbr3 ExpiryTime:2024-01-03 21:32:34 +0000 UTC Type:0 Mac:52:54:00:5a:49:af Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:newest-cni-195281 Clientid:01:52:54:00:5a:49:af}
	I0103 20:32:45.620287   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined IP address 192.168.72.219 and MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:45.620398   67249 main.go:141] libmachine: Docker is up and running!
	I0103 20:32:45.620418   67249 main.go:141] libmachine: Reticulating splines...
	I0103 20:32:45.620426   67249 client.go:171] LocalClient.Create took 26.208121017s
	I0103 20:32:45.620449   67249 start.go:167] duration metric: libmachine.API.Create for "newest-cni-195281" took 26.208190465s
	I0103 20:32:45.620456   67249 start.go:300] post-start starting for "newest-cni-195281" (driver="kvm2")
	I0103 20:32:45.620467   67249 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0103 20:32:45.620488   67249 main.go:141] libmachine: (newest-cni-195281) Calling .DriverName
	I0103 20:32:45.620753   67249 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0103 20:32:45.620791   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHHostname
	I0103 20:32:45.623465   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:45.623873   67249 main.go:141] libmachine: (newest-cni-195281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:49:af", ip: ""} in network mk-newest-cni-195281: {Iface:virbr3 ExpiryTime:2024-01-03 21:32:34 +0000 UTC Type:0 Mac:52:54:00:5a:49:af Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:newest-cni-195281 Clientid:01:52:54:00:5a:49:af}
	I0103 20:32:45.623902   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined IP address 192.168.72.219 and MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:45.624029   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHPort
	I0103 20:32:45.624213   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHKeyPath
	I0103 20:32:45.624385   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHUsername
	I0103 20:32:45.624523   67249 sshutil.go:53] new ssh client: &{IP:192.168.72.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/newest-cni-195281/id_rsa Username:docker}
	I0103 20:32:45.718372   67249 ssh_runner.go:195] Run: cat /etc/os-release
	I0103 20:32:45.722729   67249 info.go:137] Remote host: Buildroot 2021.02.12
	I0103 20:32:45.722762   67249 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-9609/.minikube/addons for local assets ...
	I0103 20:32:45.722864   67249 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-9609/.minikube/files for local assets ...
	I0103 20:32:45.722984   67249 filesync.go:149] local asset: /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem -> 167952.pem in /etc/ssl/certs
	I0103 20:32:45.723125   67249 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0103 20:32:45.733617   67249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem --> /etc/ssl/certs/167952.pem (1708 bytes)
	I0103 20:32:45.757682   67249 start.go:303] post-start completed in 137.211001ms
	I0103 20:32:45.757749   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetConfigRaw
	I0103 20:32:45.758396   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetIP
	I0103 20:32:45.761402   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:45.761798   67249 main.go:141] libmachine: (newest-cni-195281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:49:af", ip: ""} in network mk-newest-cni-195281: {Iface:virbr3 ExpiryTime:2024-01-03 21:32:34 +0000 UTC Type:0 Mac:52:54:00:5a:49:af Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:newest-cni-195281 Clientid:01:52:54:00:5a:49:af}
	I0103 20:32:45.761832   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined IP address 192.168.72.219 and MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:45.762088   67249 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/newest-cni-195281/config.json ...
	I0103 20:32:45.762302   67249 start.go:128] duration metric: createHost completed in 26.369804551s
	I0103 20:32:45.762332   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHHostname
	I0103 20:32:45.764911   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:45.765288   67249 main.go:141] libmachine: (newest-cni-195281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:49:af", ip: ""} in network mk-newest-cni-195281: {Iface:virbr3 ExpiryTime:2024-01-03 21:32:34 +0000 UTC Type:0 Mac:52:54:00:5a:49:af Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:newest-cni-195281 Clientid:01:52:54:00:5a:49:af}
	I0103 20:32:45.765321   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined IP address 192.168.72.219 and MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:45.765500   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHPort
	I0103 20:32:45.765694   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHKeyPath
	I0103 20:32:45.765902   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHKeyPath
	I0103 20:32:45.766060   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHUsername
	I0103 20:32:45.766292   67249 main.go:141] libmachine: Using SSH client type: native
	I0103 20:32:45.766620   67249 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.72.219 22 <nil> <nil>}
	I0103 20:32:45.766632   67249 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0103 20:32:45.895678   67249 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704313965.882309318
	
	I0103 20:32:45.895711   67249 fix.go:206] guest clock: 1704313965.882309318
	I0103 20:32:45.895722   67249 fix.go:219] Guest: 2024-01-03 20:32:45.882309318 +0000 UTC Remote: 2024-01-03 20:32:45.762315613 +0000 UTC m=+26.509941419 (delta=119.993705ms)
	I0103 20:32:45.895748   67249 fix.go:190] guest clock delta is within tolerance: 119.993705ms
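
The clock check runs date +%s.%N on the guest and compares the result with the host timestamp taken around the SSH call; here the delta is roughly 120ms and is accepted. A small Go sketch of the comparison, using the values from this run (the 2-second tolerance is an assumption for illustration):

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock parses the output of "date +%s.%N" (seconds.nanoseconds).
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec).UTC(), nil
}

func main() {
	// Guest value and host ("Remote") timestamp taken from the log above.
	guest, err := parseGuestClock("1704313965.882309318")
	if err != nil {
		panic(err)
	}
	host := time.Date(2024, 1, 3, 20, 32, 45, 762315613, time.UTC)

	delta := guest.Sub(host)
	const tolerance = 2 * time.Second // assumed threshold, for illustration only
	if math.Abs(delta.Seconds()) < tolerance.Seconds() {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta) // ~119.99ms
	} else {
		fmt.Printf("guest clock skew too large: %v\n", delta)
	}
}
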
	I0103 20:32:45.895770   67249 start.go:83] releasing machines lock for "newest-cni-195281", held for 26.50335784s
	I0103 20:32:45.895801   67249 main.go:141] libmachine: (newest-cni-195281) Calling .DriverName
	I0103 20:32:45.896111   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetIP
	I0103 20:32:45.898979   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:45.899363   67249 main.go:141] libmachine: (newest-cni-195281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:49:af", ip: ""} in network mk-newest-cni-195281: {Iface:virbr3 ExpiryTime:2024-01-03 21:32:34 +0000 UTC Type:0 Mac:52:54:00:5a:49:af Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:newest-cni-195281 Clientid:01:52:54:00:5a:49:af}
	I0103 20:32:45.899413   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined IP address 192.168.72.219 and MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:45.899560   67249 main.go:141] libmachine: (newest-cni-195281) Calling .DriverName
	I0103 20:32:45.900114   67249 main.go:141] libmachine: (newest-cni-195281) Calling .DriverName
	I0103 20:32:45.900299   67249 main.go:141] libmachine: (newest-cni-195281) Calling .DriverName
	I0103 20:32:45.900417   67249 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0103 20:32:45.900468   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHHostname
	I0103 20:32:45.900602   67249 ssh_runner.go:195] Run: cat /version.json
	I0103 20:32:45.900633   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHHostname
	I0103 20:32:45.903625   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:45.903655   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:45.904059   67249 main.go:141] libmachine: (newest-cni-195281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:49:af", ip: ""} in network mk-newest-cni-195281: {Iface:virbr3 ExpiryTime:2024-01-03 21:32:34 +0000 UTC Type:0 Mac:52:54:00:5a:49:af Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:newest-cni-195281 Clientid:01:52:54:00:5a:49:af}
	I0103 20:32:45.904096   67249 main.go:141] libmachine: (newest-cni-195281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:49:af", ip: ""} in network mk-newest-cni-195281: {Iface:virbr3 ExpiryTime:2024-01-03 21:32:34 +0000 UTC Type:0 Mac:52:54:00:5a:49:af Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:newest-cni-195281 Clientid:01:52:54:00:5a:49:af}
	I0103 20:32:45.904122   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined IP address 192.168.72.219 and MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:45.904142   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined IP address 192.168.72.219 and MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:45.904262   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHPort
	I0103 20:32:45.904374   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHPort
	I0103 20:32:45.904453   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHKeyPath
	I0103 20:32:45.904522   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHKeyPath
	I0103 20:32:45.904666   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHUsername
	I0103 20:32:45.904708   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHUsername
	I0103 20:32:45.904838   67249 sshutil.go:53] new ssh client: &{IP:192.168.72.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/newest-cni-195281/id_rsa Username:docker}
	I0103 20:32:45.904893   67249 sshutil.go:53] new ssh client: &{IP:192.168.72.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/newest-cni-195281/id_rsa Username:docker}
	I0103 20:32:46.030977   67249 ssh_runner.go:195] Run: systemctl --version
	I0103 20:32:46.037034   67249 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0103 20:32:46.200079   67249 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0103 20:32:46.206922   67249 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0103 20:32:46.207016   67249 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0103 20:32:46.223019   67249 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0103 20:32:46.223047   67249 start.go:475] detecting cgroup driver to use...
	I0103 20:32:46.223127   67249 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0103 20:32:46.239996   67249 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0103 20:32:46.253612   67249 docker.go:203] disabling cri-docker service (if available) ...
	I0103 20:32:46.253699   67249 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0103 20:32:46.267450   67249 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0103 20:32:46.282771   67249 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0103 20:32:46.393693   67249 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0103 20:32:46.526478   67249 docker.go:219] disabling docker service ...
	I0103 20:32:46.526587   67249 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0103 20:32:46.540410   67249 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0103 20:32:46.552921   67249 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0103 20:32:46.683462   67249 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0103 20:32:46.805351   67249 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0103 20:32:46.819457   67249 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0103 20:32:46.836394   67249 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0103 20:32:46.836464   67249 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:32:46.845831   67249 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0103 20:32:46.845925   67249 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:32:46.855232   67249 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:32:46.864892   67249 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:32:46.873915   67249 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0103 20:32:46.883629   67249 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0103 20:32:46.892075   67249 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0103 20:32:46.892200   67249 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0103 20:32:46.904374   67249 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
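
The sysctl probe fails with status 255 because br_netfilter is not loaded yet, so the driver falls back to modprobe and then enables IPv4 forwarding. A small Go sketch of that fallback (needs root; the commands mirror the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Mirror of: sudo sysctl net.bridge.bridge-nf-call-iptables
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		fmt.Println("couldn't verify netfilter, loading br_netfilter:", err)
		// Mirror of: sudo modprobe br_netfilter
		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
			panic(err)
		}
	}
	// Mirror of: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
		panic(err)
	}
}
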
	I0103 20:32:46.913766   67249 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0103 20:32:47.034679   67249 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0103 20:32:47.216427   67249 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0103 20:32:47.216509   67249 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0103 20:32:47.222160   67249 start.go:543] Will wait 60s for crictl version
	I0103 20:32:47.222235   67249 ssh_runner.go:195] Run: which crictl
	I0103 20:32:47.226110   67249 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0103 20:32:47.268069   67249 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0103 20:32:47.268163   67249 ssh_runner.go:195] Run: crio --version
	I0103 20:32:47.317148   67249 ssh_runner.go:195] Run: crio --version
	I0103 20:32:47.365121   67249 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I0103 20:32:47.366551   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetIP
	I0103 20:32:47.369708   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:47.369977   67249 main.go:141] libmachine: (newest-cni-195281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:49:af", ip: ""} in network mk-newest-cni-195281: {Iface:virbr3 ExpiryTime:2024-01-03 21:32:34 +0000 UTC Type:0 Mac:52:54:00:5a:49:af Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:newest-cni-195281 Clientid:01:52:54:00:5a:49:af}
	I0103 20:32:47.369997   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined IP address 192.168.72.219 and MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:47.370262   67249 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0103 20:32:47.374478   67249 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0103 20:32:47.388565   67249 localpath.go:92] copying /home/jenkins/minikube-integration/17885-9609/.minikube/client.crt -> /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/newest-cni-195281/client.crt
	I0103 20:32:47.388746   67249 localpath.go:117] copying /home/jenkins/minikube-integration/17885-9609/.minikube/client.key -> /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/newest-cni-195281/client.key
	I0103 20:32:47.390765   67249 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0103 20:32:47.392153   67249 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0103 20:32:47.392217   67249 ssh_runner.go:195] Run: sudo crictl images --output json
	I0103 20:32:47.427843   67249 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0103 20:32:47.427922   67249 ssh_runner.go:195] Run: which lz4
	I0103 20:32:47.431931   67249 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0103 20:32:47.436174   67249 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0103 20:32:47.436209   67249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (401795125 bytes)
	I0103 20:32:48.886506   67249 crio.go:444] Took 1.454620 seconds to copy over tarball
	I0103 20:32:48.886605   67249 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0103 20:32:51.425832   67249 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.539199724s)
	I0103 20:32:51.425868   67249 crio.go:451] Took 2.539326 seconds to extract the tarball
	I0103 20:32:51.425880   67249 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0103 20:32:51.463537   67249 ssh_runner.go:195] Run: sudo crictl images --output json
	I0103 20:32:51.542489   67249 crio.go:496] all images are preloaded for cri-o runtime.
	I0103 20:32:51.542535   67249 cache_images.go:84] Images are preloaded, skipping loading
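
The preload flow above lists images with crictl; because the expected kube-apiserver image was missing, the ~400MB tarball was copied over, unpacked with tar -I lz4 into /var, and the image list was checked again. A rough Go sketch of the "already preloaded?" decision (a substring match is used here as a simplification of the JSON decoding, and the command is run locally rather than over SSH):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// preloaded reports whether the expected image already shows up in
// "sudo crictl images --output json".
func preloaded(imageRef string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	// Simplification: substring match instead of decoding the JSON image list.
	return strings.Contains(string(out), imageRef), nil
}

func main() {
	ok, err := preloaded("registry.k8s.io/kube-apiserver:v1.29.0-rc.2")
	if err != nil {
		panic(err)
	}
	if ok {
		fmt.Println("all images are preloaded for cri-o runtime.")
	} else {
		fmt.Println("assuming images are not preloaded; copying preloaded.tar.lz4")
		// Next step in the log: scp the tarball, then: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	}
}
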
	I0103 20:32:51.542644   67249 ssh_runner.go:195] Run: crio config
	I0103 20:32:51.604708   67249 cni.go:84] Creating CNI manager for ""
	I0103 20:32:51.604736   67249 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0103 20:32:51.604756   67249 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I0103 20:32:51.604774   67249 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.72.219 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-195281 NodeName:newest-cni-195281 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.219"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.219 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0103 20:32:51.604921   67249 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.219
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-195281"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.219
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.219"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0103 20:32:51.604998   67249 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-195281 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.219
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-195281 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0103 20:32:51.605063   67249 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0103 20:32:51.614067   67249 binaries.go:44] Found k8s binaries, skipping transfer
	I0103 20:32:51.614138   67249 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0103 20:32:51.622881   67249 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (419 bytes)
	I0103 20:32:51.639844   67249 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0103 20:32:51.657148   67249 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2233 bytes)
	I0103 20:32:51.673717   67249 ssh_runner.go:195] Run: grep 192.168.72.219	control-plane.minikube.internal$ /etc/hosts
	I0103 20:32:51.677731   67249 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.219	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0103 20:32:51.691172   67249 certs.go:56] Setting up /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/newest-cni-195281 for IP: 192.168.72.219
	I0103 20:32:51.691216   67249 certs.go:190] acquiring lock for shared ca certs: {Name:mkcbd6a6a2f3ee7625ecf4a1f72bb7f9689bd33d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:32:51.691406   67249 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17885-9609/.minikube/ca.key
	I0103 20:32:51.691466   67249 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.key
	I0103 20:32:51.691555   67249 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/newest-cni-195281/client.key
	I0103 20:32:51.691578   67249 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/newest-cni-195281/apiserver.key.67e26840
	I0103 20:32:51.691591   67249 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/newest-cni-195281/apiserver.crt.67e26840 with IP's: [192.168.72.219 10.96.0.1 127.0.0.1 10.0.0.1]
	I0103 20:32:51.819513   67249 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/newest-cni-195281/apiserver.crt.67e26840 ...
	I0103 20:32:51.819543   67249 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/newest-cni-195281/apiserver.crt.67e26840: {Name:mke6310b8f3a7f62097b99eb3014efd0dc20eee7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:32:51.819753   67249 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/newest-cni-195281/apiserver.key.67e26840 ...
	I0103 20:32:51.819775   67249 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/newest-cni-195281/apiserver.key.67e26840: {Name:mk86f84e3544818fe75547ad73b8572d5ea7d5d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:32:51.819889   67249 certs.go:337] copying /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/newest-cni-195281/apiserver.crt.67e26840 -> /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/newest-cni-195281/apiserver.crt
	I0103 20:32:51.819951   67249 certs.go:341] copying /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/newest-cni-195281/apiserver.key.67e26840 -> /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/newest-cni-195281/apiserver.key
	I0103 20:32:51.819998   67249 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/newest-cni-195281/proxy-client.key
	I0103 20:32:51.820011   67249 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/newest-cni-195281/proxy-client.crt with IP's: []
	I0103 20:32:52.091348   67249 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/newest-cni-195281/proxy-client.crt ...
	I0103 20:32:52.091389   67249 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/newest-cni-195281/proxy-client.crt: {Name:mk0bd3b5025560ca11106a8bacced64f41bc0bb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:32:52.091598   67249 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/newest-cni-195281/proxy-client.key ...
	I0103 20:32:52.091624   67249 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/newest-cni-195281/proxy-client.key: {Name:mkb6394b7df36e99fa2b47f41fee526be70aa354 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:32:52.091875   67249 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795.pem (1338 bytes)
	W0103 20:32:52.091916   67249 certs.go:433] ignoring /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795_empty.pem, impossibly tiny 0 bytes
	I0103 20:32:52.091924   67249 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem (1675 bytes)
	I0103 20:32:52.091945   67249 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem (1078 bytes)
	I0103 20:32:52.091968   67249 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem (1123 bytes)
	I0103 20:32:52.092005   67249 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem (1679 bytes)
	I0103 20:32:52.092084   67249 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem (1708 bytes)
	I0103 20:32:52.092677   67249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/newest-cni-195281/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0103 20:32:52.119326   67249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/newest-cni-195281/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0103 20:32:52.144246   67249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/newest-cni-195281/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0103 20:32:52.168845   67249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/newest-cni-195281/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0103 20:32:52.193428   67249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0103 20:32:52.217391   67249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0103 20:32:52.241585   67249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0103 20:32:52.267288   67249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0103 20:32:52.292564   67249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0103 20:32:52.316091   67249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795.pem --> /usr/share/ca-certificates/16795.pem (1338 bytes)
	I0103 20:32:52.339271   67249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem --> /usr/share/ca-certificates/167952.pem (1708 bytes)
	I0103 20:32:52.363053   67249 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0103 20:32:52.379247   67249 ssh_runner.go:195] Run: openssl version
	I0103 20:32:52.385228   67249 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0103 20:32:52.395301   67249 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:32:52.400316   67249 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  3 18:58 /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:32:52.400391   67249 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:32:52.406648   67249 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0103 20:32:52.417403   67249 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16795.pem && ln -fs /usr/share/ca-certificates/16795.pem /etc/ssl/certs/16795.pem"
	I0103 20:32:52.428037   67249 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16795.pem
	I0103 20:32:52.433100   67249 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  3 19:07 /usr/share/ca-certificates/16795.pem
	I0103 20:32:52.433177   67249 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16795.pem
	I0103 20:32:52.439099   67249 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16795.pem /etc/ssl/certs/51391683.0"
	I0103 20:32:52.449452   67249 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167952.pem && ln -fs /usr/share/ca-certificates/167952.pem /etc/ssl/certs/167952.pem"
	I0103 20:32:52.460722   67249 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167952.pem
	I0103 20:32:52.465623   67249 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  3 19:07 /usr/share/ca-certificates/167952.pem
	I0103 20:32:52.465683   67249 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167952.pem
	I0103 20:32:52.471232   67249 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167952.pem /etc/ssl/certs/3ec20f2e.0"
	I0103 20:32:52.481150   67249 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0103 20:32:52.485667   67249 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0103 20:32:52.485744   67249 kubeadm.go:404] StartCluster: {Name:newest-cni-195281 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.29.0-rc.2 ClusterName:newest-cni-195281 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.219 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jen
kins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 20:32:52.485826   67249 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0103 20:32:52.485909   67249 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0103 20:32:52.531498   67249 cri.go:89] found id: ""
	I0103 20:32:52.531561   67249 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0103 20:32:52.540939   67249 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0103 20:32:52.550366   67249 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0103 20:32:52.561098   67249 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0103 20:32:52.561141   67249 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0103 20:32:52.688110   67249 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.2
	I0103 20:32:52.688227   67249 kubeadm.go:322] [preflight] Running pre-flight checks
	I0103 20:32:52.982436   67249 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0103 20:32:52.982649   67249 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0103 20:32:52.982759   67249 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0103 20:32:53.224308   67249 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0103 20:32:53.374760   67249 out.go:204]   - Generating certificates and keys ...
	I0103 20:32:53.374889   67249 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0103 20:32:53.374992   67249 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0103 20:32:53.375097   67249 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0103 20:32:53.441111   67249 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0103 20:32:53.628208   67249 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0103 20:32:53.797130   67249 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0103 20:32:53.952777   67249 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0103 20:32:53.953156   67249 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-195281] and IPs [192.168.72.219 127.0.0.1 ::1]
	I0103 20:32:54.217335   67249 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0103 20:32:54.217519   67249 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-195281] and IPs [192.168.72.219 127.0.0.1 ::1]
	I0103 20:32:54.566407   67249 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0103 20:32:54.711625   67249 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0103 20:32:54.998510   67249 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0103 20:32:54.998854   67249 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0103 20:32:55.388836   67249 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0103 20:32:55.480482   67249 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0103 20:32:55.693814   67249 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0103 20:32:55.832458   67249 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0103 20:32:55.924416   67249 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0103 20:32:55.925246   67249 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0103 20:32:55.928467   67249 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0103 20:32:55.930672   67249 out.go:204]   - Booting up control plane ...
	I0103 20:32:55.930771   67249 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0103 20:32:55.930840   67249 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0103 20:32:55.930933   67249 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0103 20:32:55.948035   67249 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0103 20:32:55.949287   67249 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0103 20:32:55.949335   67249 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0103 20:32:56.085462   67249 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	
	
	==> CRI-O <==
	-- Journal begins at Wed 2024-01-03 20:13:01 UTC, ends at Wed 2024-01-03 20:33:02 UTC. --
	Jan 03 20:33:02 no-preload-749210 crio[715]: time="2024-01-03 20:33:02.495859635Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704313982495775905,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=f6c65a4b-398d-430c-9bd3-bc5e9887679b name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 20:33:02 no-preload-749210 crio[715]: time="2024-01-03 20:33:02.496390306Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=d58ca517-0753-45b2-81ec-1654b32eace0 name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:33:02 no-preload-749210 crio[715]: time="2024-01-03 20:33:02.496435651Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=d58ca517-0753-45b2-81ec-1654b32eace0 name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:33:02 no-preload-749210 crio[715]: time="2024-01-03 20:33:02.496651529Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:08f95eed823c13190efb38f0b605b92442b8229f1fd1e3b9ab3f2d7fdf18c052,PodSandboxId:5aa887d440d33227e21a77ca0bfecf128beb453149ee7d388729f58ad577fc91,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1704312859463688336,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bf4f1d7-c083-47e7-9976-76bbc72e7bff,},Annotations:map[string]string{io.kubernetes.container.hash: b646abf7,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf18f2f2f6b890569cbe272741251b2382ba323933aa17c91e69ebe474026827,PodSandboxId:0fecf732af3d98284f07096a6c2154e8957b91166978fddea56d5eb53d42eb2e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1704312840492955145,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9a560811-1b16-4bbb-98e8-ceb54e9f8bc8,},Annotations:map[string]string{io.kubernetes.container.hash: 50f62300,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b13d0a23b2b29e43671041dd6b0519d5cdc0ff78c47236c54585056c177f7f4a,PodSandboxId:8a53c0c544eaa90f4252f374271277142681ae680d6289fc7b7fdb1fecb3ee6c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1704312836616376390,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-rbx58,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5e91e6a-e3f9-4dbc-83ff-3069cb67847c,},Annotations:map[string]string{io.kubernetes.container.hash: e0299e54,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:367b9549fe5f7c717d71d33d0fbf5559d0b671b4eec29201566aa8354781474d,PodSandboxId:5aa887d440d33227e21a77ca0bfecf128beb453149ee7d388729f58ad577fc91,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1704312829187059443,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 1bf4f1d7-c083-47e7-9976-76bbc72e7bff,},Annotations:map[string]string{io.kubernetes.container.hash: b646abf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:250be399ab1a0c358e410293a6952aadd55d48a462f2edb9c6d0b560eb323cd8,PodSandboxId:11b51934a004f8813caad8f3a521040e3860a408abcaa2879a6b63f2e74666b6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1704312829142220806,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5hwf4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98fafdf5-9a7
4-4c9f-96eb-20064c72c4e1,},Annotations:map[string]string{io.kubernetes.container.hash: a256ba75,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03433af76d74a120ca843ab0d9d45d2a0808d9048bf656bff68d7b7371082893,PodSandboxId:55ec540cf0bb90a783e1d0e074b925e7c46ab2064403b516c384502e698f9b2e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1704312822704754636,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-749210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6babebb750aaa2273bf
3c92e69b421d0,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7d2f606bd4457e00acad3e38e40d2a0eeba1830b665bd0b453061447cb63748,PodSandboxId:2abd877507e1eeb17bac598c6306ff7f3ac69dd4f20a886760fc5fcb935418bd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1704312822572964404,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-749210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1444165caa04e38cec5c0c2f8cc303e,},Annotations:map[string]string{io.ku
bernetes.container.hash: 67b65f4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67f470e7e603d5253da506a4ad4eacce339e32b382bb6ad5e981e6f0c40abb85,PodSandboxId:bde5c7ca363e1f689fd6148fa640fecaef8f66f4cb296a11287144da436c347b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1704312822264978797,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-749210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ead3d115a92e44f831043fbd0ae0d168,},Annotations
:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb19a705262540d0025443f8a9db8a3139cffa87d8fd8f412b12547818a91a8b,PodSandboxId:abfbbc6cf5b80d8dbcd720a3f338646ccd615c6eddb0388aff645f2244d81145,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1704312822104388070,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-749210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74af6328771f18a2cc89e2cdf431801b,},Annotations:map[string
]string{io.kubernetes.container.hash: 8cf259b4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=d58ca517-0753-45b2-81ec-1654b32eace0 name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:33:02 no-preload-749210 crio[715]: time="2024-01-03 20:33:02.543589878Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=3b58da53-961b-4c92-ac07-f24db8d926ae name=/runtime.v1.RuntimeService/Version
	Jan 03 20:33:02 no-preload-749210 crio[715]: time="2024-01-03 20:33:02.543694711Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=3b58da53-961b-4c92-ac07-f24db8d926ae name=/runtime.v1.RuntimeService/Version
	Jan 03 20:33:02 no-preload-749210 crio[715]: time="2024-01-03 20:33:02.545007256Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=c3cdb0ce-1088-4f0c-a494-4d38db0519d2 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 20:33:02 no-preload-749210 crio[715]: time="2024-01-03 20:33:02.545341506Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704313982545329227,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=c3cdb0ce-1088-4f0c-a494-4d38db0519d2 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 20:33:02 no-preload-749210 crio[715]: time="2024-01-03 20:33:02.546226923Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f99327f9-b1a9-4896-9364-cd990409b2fb name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:33:02 no-preload-749210 crio[715]: time="2024-01-03 20:33:02.546276387Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f99327f9-b1a9-4896-9364-cd990409b2fb name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:33:02 no-preload-749210 crio[715]: time="2024-01-03 20:33:02.546495959Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:08f95eed823c13190efb38f0b605b92442b8229f1fd1e3b9ab3f2d7fdf18c052,PodSandboxId:5aa887d440d33227e21a77ca0bfecf128beb453149ee7d388729f58ad577fc91,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1704312859463688336,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bf4f1d7-c083-47e7-9976-76bbc72e7bff,},Annotations:map[string]string{io.kubernetes.container.hash: b646abf7,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf18f2f2f6b890569cbe272741251b2382ba323933aa17c91e69ebe474026827,PodSandboxId:0fecf732af3d98284f07096a6c2154e8957b91166978fddea56d5eb53d42eb2e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1704312840492955145,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9a560811-1b16-4bbb-98e8-ceb54e9f8bc8,},Annotations:map[string]string{io.kubernetes.container.hash: 50f62300,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b13d0a23b2b29e43671041dd6b0519d5cdc0ff78c47236c54585056c177f7f4a,PodSandboxId:8a53c0c544eaa90f4252f374271277142681ae680d6289fc7b7fdb1fecb3ee6c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1704312836616376390,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-rbx58,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5e91e6a-e3f9-4dbc-83ff-3069cb67847c,},Annotations:map[string]string{io.kubernetes.container.hash: e0299e54,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:367b9549fe5f7c717d71d33d0fbf5559d0b671b4eec29201566aa8354781474d,PodSandboxId:5aa887d440d33227e21a77ca0bfecf128beb453149ee7d388729f58ad577fc91,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1704312829187059443,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 1bf4f1d7-c083-47e7-9976-76bbc72e7bff,},Annotations:map[string]string{io.kubernetes.container.hash: b646abf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:250be399ab1a0c358e410293a6952aadd55d48a462f2edb9c6d0b560eb323cd8,PodSandboxId:11b51934a004f8813caad8f3a521040e3860a408abcaa2879a6b63f2e74666b6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1704312829142220806,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5hwf4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98fafdf5-9a7
4-4c9f-96eb-20064c72c4e1,},Annotations:map[string]string{io.kubernetes.container.hash: a256ba75,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03433af76d74a120ca843ab0d9d45d2a0808d9048bf656bff68d7b7371082893,PodSandboxId:55ec540cf0bb90a783e1d0e074b925e7c46ab2064403b516c384502e698f9b2e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1704312822704754636,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-749210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6babebb750aaa2273bf
3c92e69b421d0,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7d2f606bd4457e00acad3e38e40d2a0eeba1830b665bd0b453061447cb63748,PodSandboxId:2abd877507e1eeb17bac598c6306ff7f3ac69dd4f20a886760fc5fcb935418bd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1704312822572964404,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-749210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1444165caa04e38cec5c0c2f8cc303e,},Annotations:map[string]string{io.ku
bernetes.container.hash: 67b65f4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67f470e7e603d5253da506a4ad4eacce339e32b382bb6ad5e981e6f0c40abb85,PodSandboxId:bde5c7ca363e1f689fd6148fa640fecaef8f66f4cb296a11287144da436c347b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1704312822264978797,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-749210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ead3d115a92e44f831043fbd0ae0d168,},Annotations
:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb19a705262540d0025443f8a9db8a3139cffa87d8fd8f412b12547818a91a8b,PodSandboxId:abfbbc6cf5b80d8dbcd720a3f338646ccd615c6eddb0388aff645f2244d81145,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1704312822104388070,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-749210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74af6328771f18a2cc89e2cdf431801b,},Annotations:map[string
]string{io.kubernetes.container.hash: 8cf259b4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f99327f9-b1a9-4896-9364-cd990409b2fb name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:33:02 no-preload-749210 crio[715]: time="2024-01-03 20:33:02.587559206Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=b00ab9ec-831c-435e-9e14-aa35757f3612 name=/runtime.v1.RuntimeService/Version
	Jan 03 20:33:02 no-preload-749210 crio[715]: time="2024-01-03 20:33:02.587647942Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=b00ab9ec-831c-435e-9e14-aa35757f3612 name=/runtime.v1.RuntimeService/Version
	Jan 03 20:33:02 no-preload-749210 crio[715]: time="2024-01-03 20:33:02.588960020Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=d07039ba-788d-478f-9095-d88a49310240 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 20:33:02 no-preload-749210 crio[715]: time="2024-01-03 20:33:02.589395260Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704313982589379853,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=d07039ba-788d-478f-9095-d88a49310240 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 20:33:02 no-preload-749210 crio[715]: time="2024-01-03 20:33:02.590120369Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=60d8d509-a537-43e4-bc9b-93f2587711ac name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:33:02 no-preload-749210 crio[715]: time="2024-01-03 20:33:02.590193187Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=60d8d509-a537-43e4-bc9b-93f2587711ac name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:33:02 no-preload-749210 crio[715]: time="2024-01-03 20:33:02.590409111Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:08f95eed823c13190efb38f0b605b92442b8229f1fd1e3b9ab3f2d7fdf18c052,PodSandboxId:5aa887d440d33227e21a77ca0bfecf128beb453149ee7d388729f58ad577fc91,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1704312859463688336,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bf4f1d7-c083-47e7-9976-76bbc72e7bff,},Annotations:map[string]string{io.kubernetes.container.hash: b646abf7,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf18f2f2f6b890569cbe272741251b2382ba323933aa17c91e69ebe474026827,PodSandboxId:0fecf732af3d98284f07096a6c2154e8957b91166978fddea56d5eb53d42eb2e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1704312840492955145,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9a560811-1b16-4bbb-98e8-ceb54e9f8bc8,},Annotations:map[string]string{io.kubernetes.container.hash: 50f62300,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b13d0a23b2b29e43671041dd6b0519d5cdc0ff78c47236c54585056c177f7f4a,PodSandboxId:8a53c0c544eaa90f4252f374271277142681ae680d6289fc7b7fdb1fecb3ee6c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1704312836616376390,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-rbx58,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5e91e6a-e3f9-4dbc-83ff-3069cb67847c,},Annotations:map[string]string{io.kubernetes.container.hash: e0299e54,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:367b9549fe5f7c717d71d33d0fbf5559d0b671b4eec29201566aa8354781474d,PodSandboxId:5aa887d440d33227e21a77ca0bfecf128beb453149ee7d388729f58ad577fc91,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1704312829187059443,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 1bf4f1d7-c083-47e7-9976-76bbc72e7bff,},Annotations:map[string]string{io.kubernetes.container.hash: b646abf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:250be399ab1a0c358e410293a6952aadd55d48a462f2edb9c6d0b560eb323cd8,PodSandboxId:11b51934a004f8813caad8f3a521040e3860a408abcaa2879a6b63f2e74666b6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1704312829142220806,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5hwf4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98fafdf5-9a7
4-4c9f-96eb-20064c72c4e1,},Annotations:map[string]string{io.kubernetes.container.hash: a256ba75,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03433af76d74a120ca843ab0d9d45d2a0808d9048bf656bff68d7b7371082893,PodSandboxId:55ec540cf0bb90a783e1d0e074b925e7c46ab2064403b516c384502e698f9b2e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1704312822704754636,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-749210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6babebb750aaa2273bf
3c92e69b421d0,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7d2f606bd4457e00acad3e38e40d2a0eeba1830b665bd0b453061447cb63748,PodSandboxId:2abd877507e1eeb17bac598c6306ff7f3ac69dd4f20a886760fc5fcb935418bd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1704312822572964404,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-749210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1444165caa04e38cec5c0c2f8cc303e,},Annotations:map[string]string{io.ku
bernetes.container.hash: 67b65f4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67f470e7e603d5253da506a4ad4eacce339e32b382bb6ad5e981e6f0c40abb85,PodSandboxId:bde5c7ca363e1f689fd6148fa640fecaef8f66f4cb296a11287144da436c347b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1704312822264978797,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-749210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ead3d115a92e44f831043fbd0ae0d168,},Annotations
:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb19a705262540d0025443f8a9db8a3139cffa87d8fd8f412b12547818a91a8b,PodSandboxId:abfbbc6cf5b80d8dbcd720a3f338646ccd615c6eddb0388aff645f2244d81145,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1704312822104388070,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-749210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74af6328771f18a2cc89e2cdf431801b,},Annotations:map[string
]string{io.kubernetes.container.hash: 8cf259b4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=60d8d509-a537-43e4-bc9b-93f2587711ac name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:33:02 no-preload-749210 crio[715]: time="2024-01-03 20:33:02.628963607Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=4f49e3aa-5534-4732-b726-46400359b655 name=/runtime.v1.RuntimeService/Version
	Jan 03 20:33:02 no-preload-749210 crio[715]: time="2024-01-03 20:33:02.629065947Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=4f49e3aa-5534-4732-b726-46400359b655 name=/runtime.v1.RuntimeService/Version
	Jan 03 20:33:02 no-preload-749210 crio[715]: time="2024-01-03 20:33:02.630528432Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=508319a0-c790-45d1-aa4f-cb2ed1750126 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 20:33:02 no-preload-749210 crio[715]: time="2024-01-03 20:33:02.630968564Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704313982630952181,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=508319a0-c790-45d1-aa4f-cb2ed1750126 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 20:33:02 no-preload-749210 crio[715]: time="2024-01-03 20:33:02.631680722Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=cdc757bd-0f9f-42a1-aa28-dee6e66ddad6 name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:33:02 no-preload-749210 crio[715]: time="2024-01-03 20:33:02.631757670Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=cdc757bd-0f9f-42a1-aa28-dee6e66ddad6 name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:33:02 no-preload-749210 crio[715]: time="2024-01-03 20:33:02.632025164Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:08f95eed823c13190efb38f0b605b92442b8229f1fd1e3b9ab3f2d7fdf18c052,PodSandboxId:5aa887d440d33227e21a77ca0bfecf128beb453149ee7d388729f58ad577fc91,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1704312859463688336,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bf4f1d7-c083-47e7-9976-76bbc72e7bff,},Annotations:map[string]string{io.kubernetes.container.hash: b646abf7,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf18f2f2f6b890569cbe272741251b2382ba323933aa17c91e69ebe474026827,PodSandboxId:0fecf732af3d98284f07096a6c2154e8957b91166978fddea56d5eb53d42eb2e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1704312840492955145,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9a560811-1b16-4bbb-98e8-ceb54e9f8bc8,},Annotations:map[string]string{io.kubernetes.container.hash: 50f62300,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b13d0a23b2b29e43671041dd6b0519d5cdc0ff78c47236c54585056c177f7f4a,PodSandboxId:8a53c0c544eaa90f4252f374271277142681ae680d6289fc7b7fdb1fecb3ee6c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1704312836616376390,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-rbx58,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5e91e6a-e3f9-4dbc-83ff-3069cb67847c,},Annotations:map[string]string{io.kubernetes.container.hash: e0299e54,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:367b9549fe5f7c717d71d33d0fbf5559d0b671b4eec29201566aa8354781474d,PodSandboxId:5aa887d440d33227e21a77ca0bfecf128beb453149ee7d388729f58ad577fc91,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1704312829187059443,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 1bf4f1d7-c083-47e7-9976-76bbc72e7bff,},Annotations:map[string]string{io.kubernetes.container.hash: b646abf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:250be399ab1a0c358e410293a6952aadd55d48a462f2edb9c6d0b560eb323cd8,PodSandboxId:11b51934a004f8813caad8f3a521040e3860a408abcaa2879a6b63f2e74666b6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1704312829142220806,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5hwf4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98fafdf5-9a7
4-4c9f-96eb-20064c72c4e1,},Annotations:map[string]string{io.kubernetes.container.hash: a256ba75,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03433af76d74a120ca843ab0d9d45d2a0808d9048bf656bff68d7b7371082893,PodSandboxId:55ec540cf0bb90a783e1d0e074b925e7c46ab2064403b516c384502e698f9b2e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1704312822704754636,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-749210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6babebb750aaa2273bf
3c92e69b421d0,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7d2f606bd4457e00acad3e38e40d2a0eeba1830b665bd0b453061447cb63748,PodSandboxId:2abd877507e1eeb17bac598c6306ff7f3ac69dd4f20a886760fc5fcb935418bd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1704312822572964404,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-749210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1444165caa04e38cec5c0c2f8cc303e,},Annotations:map[string]string{io.ku
bernetes.container.hash: 67b65f4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67f470e7e603d5253da506a4ad4eacce339e32b382bb6ad5e981e6f0c40abb85,PodSandboxId:bde5c7ca363e1f689fd6148fa640fecaef8f66f4cb296a11287144da436c347b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1704312822264978797,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-749210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ead3d115a92e44f831043fbd0ae0d168,},Annotations
:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb19a705262540d0025443f8a9db8a3139cffa87d8fd8f412b12547818a91a8b,PodSandboxId:abfbbc6cf5b80d8dbcd720a3f338646ccd615c6eddb0388aff645f2244d81145,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1704312822104388070,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-749210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74af6328771f18a2cc89e2cdf431801b,},Annotations:map[string
]string{io.kubernetes.container.hash: 8cf259b4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=cdc757bd-0f9f-42a1-aa28-dee6e66ddad6 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	08f95eed823c1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      18 minutes ago      Running             storage-provisioner       3                   5aa887d440d33       storage-provisioner
	bf18f2f2f6b89       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   19 minutes ago      Running             busybox                   1                   0fecf732af3d9       busybox
	b13d0a23b2b29       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      19 minutes ago      Running             coredns                   1                   8a53c0c544eaa       coredns-76f75df574-rbx58
	367b9549fe5f7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Exited              storage-provisioner       2                   5aa887d440d33       storage-provisioner
	250be399ab1a0       cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834                                      19 minutes ago      Running             kube-proxy                1                   11b51934a004f       kube-proxy-5hwf4
	03433af76d74a       4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210                                      19 minutes ago      Running             kube-scheduler            1                   55ec540cf0bb9       kube-scheduler-no-preload-749210
	f7d2f606bd445       a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7                                      19 minutes ago      Running             etcd                      1                   2abd877507e1e       etcd-no-preload-749210
	67f470e7e603d       d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d                                      19 minutes ago      Running             kube-controller-manager   1                   bde5c7ca363e1       kube-controller-manager-no-preload-749210
	fb19a70526254       bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f                                      19 minutes ago      Running             kube-apiserver            1                   abfbbc6cf5b80       kube-apiserver-no-preload-749210
	
	
	==> coredns [b13d0a23b2b29e43671041dd6b0519d5cdc0ff78c47236c54585056c177f7f4a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:55139 - 62607 "HINFO IN 9055025431400979744.4890078852502409788. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011418444s
	
	
	==> describe nodes <==
	Name:               no-preload-749210
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-749210
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b6a81cbc05f28310ff11df4170e79e2b8bf477a
	                    minikube.k8s.io/name=no-preload-749210
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_03T20_05_26_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 03 Jan 2024 20:05:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-749210
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 03 Jan 2024 20:33:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 03 Jan 2024 20:29:35 +0000   Wed, 03 Jan 2024 20:05:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 03 Jan 2024 20:29:35 +0000   Wed, 03 Jan 2024 20:05:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 03 Jan 2024 20:29:35 +0000   Wed, 03 Jan 2024 20:05:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 03 Jan 2024 20:29:35 +0000   Wed, 03 Jan 2024 20:13:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.245
	  Hostname:    no-preload-749210
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 7606b028033543858e648631d2e3789f
	  System UUID:                7606b028-0335-4385-8e64-8631d2e3789f
	  Boot ID:                    e9109145-cffd-42f2-9675-c9d2c4d88f7b
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.29.0-rc.2
	  Kube-Proxy Version:         v1.29.0-rc.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 coredns-76f75df574-rbx58                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 etcd-no-preload-749210                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         27m
	  kube-system                 kube-apiserver-no-preload-749210             250m (12%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-controller-manager-no-preload-749210    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-5hwf4                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-scheduler-no-preload-749210             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 metrics-server-57f55c9bc5-tqn5m              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         26m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 27m                kube-proxy       
	  Normal  Starting                 19m                kube-proxy       
	  Normal  NodeAllocatableEnforced  27m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  27m                kubelet          Node no-preload-749210 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27m                kubelet          Node no-preload-749210 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27m                kubelet          Node no-preload-749210 status is now: NodeHasSufficientPID
	  Normal  NodeReady                27m                kubelet          Node no-preload-749210 status is now: NodeReady
	  Normal  Starting                 27m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           27m                node-controller  Node no-preload-749210 event: Registered Node no-preload-749210 in Controller
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node no-preload-749210 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node no-preload-749210 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)  kubelet          Node no-preload-749210 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           19m                node-controller  Node no-preload-749210 event: Registered Node no-preload-749210 in Controller
	
	
	==> dmesg <==
	[Jan 3 20:12] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.062282] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.393777] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Jan 3 20:13] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.134005] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.455542] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.376044] systemd-fstab-generator[642]: Ignoring "noauto" for root device
	[  +0.123299] systemd-fstab-generator[653]: Ignoring "noauto" for root device
	[  +0.157223] systemd-fstab-generator[666]: Ignoring "noauto" for root device
	[  +0.126516] systemd-fstab-generator[677]: Ignoring "noauto" for root device
	[  +0.246965] systemd-fstab-generator[701]: Ignoring "noauto" for root device
	[ +29.915829] systemd-fstab-generator[1328]: Ignoring "noauto" for root device
	[ +15.023103] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [f7d2f606bd4457e00acad3e38e40d2a0eeba1830b665bd0b453061447cb63748] <==
	{"level":"info","ts":"2024-01-03T20:13:59.158357Z","caller":"traceutil/trace.go:171","msg":"trace[827098758] range","detail":"{range_begin:/registry/pods/kube-system/etcd-no-preload-749210; range_end:; response_count:1; response_revision:566; }","duration":"464.289227ms","start":"2024-01-03T20:13:58.694053Z","end":"2024-01-03T20:13:59.158342Z","steps":["trace[827098758] 'range keys from in-memory index tree'  (duration: 463.99247ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-03T20:13:59.158467Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-03T20:13:58.694036Z","time spent":"464.42131ms","remote":"127.0.0.1:46830","response type":"/etcdserverpb.KV/Range","request count":0,"request size":51,"response count":1,"response size":5633,"request content":"key:\"/registry/pods/kube-system/etcd-no-preload-749210\" "}
	{"level":"warn","ts":"2024-01-03T20:13:59.15824Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"517.883325ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.61.245\" ","response":"range_response_count:1 size:135"}
	{"level":"info","ts":"2024-01-03T20:13:59.158578Z","caller":"traceutil/trace.go:171","msg":"trace[500221479] range","detail":"{range_begin:/registry/masterleases/192.168.61.245; range_end:; response_count:1; response_revision:566; }","duration":"518.220578ms","start":"2024-01-03T20:13:58.640339Z","end":"2024-01-03T20:13:59.15856Z","steps":["trace[500221479] 'range keys from in-memory index tree'  (duration: 517.706306ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-03T20:13:59.158616Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-03T20:13:58.640326Z","time spent":"518.28023ms","remote":"127.0.0.1:46796","response type":"/etcdserverpb.KV/Range","request count":0,"request size":39,"response count":1,"response size":159,"request content":"key:\"/registry/masterleases/192.168.61.245\" "}
	{"level":"info","ts":"2024-01-03T20:13:59.323846Z","caller":"traceutil/trace.go:171","msg":"trace[1017916715] linearizableReadLoop","detail":"{readStateIndex:608; appliedIndex:607; }","duration":"161.617836ms","start":"2024-01-03T20:13:59.162116Z","end":"2024-01-03T20:13:59.323733Z","steps":["trace[1017916715] 'read index received'  (duration: 161.504254ms)","trace[1017916715] 'applied index is now lower than readState.Index'  (duration: 112.845µs)"],"step_count":2}
	{"level":"warn","ts":"2024-01-03T20:13:59.324009Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"161.896598ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/no-preload-749210\" ","response":"range_response_count:1 size:4441"}
	{"level":"info","ts":"2024-01-03T20:13:59.324053Z","caller":"traceutil/trace.go:171","msg":"trace[1176094791] range","detail":"{range_begin:/registry/minions/no-preload-749210; range_end:; response_count:1; response_revision:566; }","duration":"161.951792ms","start":"2024-01-03T20:13:59.162094Z","end":"2024-01-03T20:13:59.324046Z","steps":["trace[1176094791] 'agreement among raft nodes before linearized reading'  (duration: 161.865333ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-03T20:13:59.713561Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"256.483423ms","expected-duration":"100ms","prefix":"","request":"header:<ID:3441749369347487512 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.61.245\" mod_revision:489 > success:<request_put:<key:\"/registry/masterleases/192.168.61.245\" value_size:67 lease:3441749369347487509 >> failure:<request_range:<key:\"/registry/masterleases/192.168.61.245\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-01-03T20:13:59.714181Z","caller":"traceutil/trace.go:171","msg":"trace[1947085449] transaction","detail":"{read_only:false; response_revision:567; number_of_response:1; }","duration":"386.877101ms","start":"2024-01-03T20:13:59.327195Z","end":"2024-01-03T20:13:59.714072Z","steps":["trace[1947085449] 'process raft request'  (duration: 129.436175ms)","trace[1947085449] 'compare'  (duration: 254.931617ms)"],"step_count":2}
	{"level":"warn","ts":"2024-01-03T20:13:59.714523Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-03T20:13:59.327179Z","time spent":"387.146444ms","remote":"127.0.0.1:46796","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":120,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/masterleases/192.168.61.245\" mod_revision:489 > success:<request_put:<key:\"/registry/masterleases/192.168.61.245\" value_size:67 lease:3441749369347487509 >> failure:<request_range:<key:\"/registry/masterleases/192.168.61.245\" > >"}
	{"level":"info","ts":"2024-01-03T20:13:59.720767Z","caller":"traceutil/trace.go:171","msg":"trace[1521015019] linearizableReadLoop","detail":"{readStateIndex:609; appliedIndex:608; }","duration":"385.511712ms","start":"2024-01-03T20:13:59.328434Z","end":"2024-01-03T20:13:59.713946Z","steps":["trace[1521015019] 'read index received'  (duration: 128.136205ms)","trace[1521015019] 'applied index is now lower than readState.Index'  (duration: 257.373615ms)"],"step_count":2}
	{"level":"warn","ts":"2024-01-03T20:13:59.720369Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"391.942232ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-no-preload-749210\" ","response":"range_response_count:1 size:5609"}
	{"level":"info","ts":"2024-01-03T20:13:59.721422Z","caller":"traceutil/trace.go:171","msg":"trace[1348731087] range","detail":"{range_begin:/registry/pods/kube-system/etcd-no-preload-749210; range_end:; response_count:1; response_revision:567; }","duration":"393.002433ms","start":"2024-01-03T20:13:59.328405Z","end":"2024-01-03T20:13:59.721407Z","steps":["trace[1348731087] 'agreement among raft nodes before linearized reading'  (duration: 391.809133ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-03T20:13:59.721466Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-03T20:13:59.328394Z","time spent":"393.056542ms","remote":"127.0.0.1:46830","response type":"/etcdserverpb.KV/Range","request count":0,"request size":51,"response count":1,"response size":5633,"request content":"key:\"/registry/pods/kube-system/etcd-no-preload-749210\" "}
	{"level":"warn","ts":"2024-01-03T20:13:59.721708Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"131.687489ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/node-controller\" ","response":"range_response_count:1 size:195"}
	{"level":"info","ts":"2024-01-03T20:13:59.721775Z","caller":"traceutil/trace.go:171","msg":"trace[440588789] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/node-controller; range_end:; response_count:1; response_revision:567; }","duration":"131.756775ms","start":"2024-01-03T20:13:59.590009Z","end":"2024-01-03T20:13:59.721766Z","steps":["trace[440588789] 'agreement among raft nodes before linearized reading'  (duration: 131.637673ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-03T20:14:00.099038Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"284.824673ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/generic-garbage-collector\" ","response":"range_response_count:1 size:216"}
	{"level":"info","ts":"2024-01-03T20:14:00.099212Z","caller":"traceutil/trace.go:171","msg":"trace[526207413] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/generic-garbage-collector; range_end:; response_count:1; response_revision:567; }","duration":"285.256686ms","start":"2024-01-03T20:13:59.813936Z","end":"2024-01-03T20:14:00.099192Z","steps":["trace[526207413] 'range keys from in-memory index tree'  (duration: 284.720007ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-03T20:23:45.980302Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":829}
	{"level":"info","ts":"2024-01-03T20:23:45.983881Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":829,"took":"3.212788ms","hash":1439995688}
	{"level":"info","ts":"2024-01-03T20:23:45.983959Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1439995688,"revision":829,"compact-revision":-1}
	{"level":"info","ts":"2024-01-03T20:28:45.988147Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1071}
	{"level":"info","ts":"2024-01-03T20:28:45.990187Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1071,"took":"1.48849ms","hash":209509946}
	{"level":"info","ts":"2024-01-03T20:28:45.990271Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":209509946,"revision":1071,"compact-revision":829}
	
	
	==> kernel <==
	 20:33:03 up 20 min,  0 users,  load average: 0.29, 0.23, 0.19
	Linux no-preload-749210 5.10.57 #1 SMP Sat Dec 16 11:03:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [fb19a705262540d0025443f8a9db8a3139cffa87d8fd8f412b12547818a91a8b] <==
	I0103 20:26:48.430592       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0103 20:28:47.430459       1 handler_proxy.go:93] no RequestInfo found in the context
	E0103 20:28:47.430994       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0103 20:28:48.431926       1 handler_proxy.go:93] no RequestInfo found in the context
	W0103 20:28:48.431951       1 handler_proxy.go:93] no RequestInfo found in the context
	E0103 20:28:48.432139       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	E0103 20:28:48.432179       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0103 20:28:48.432179       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0103 20:28:48.433367       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0103 20:29:48.433170       1 handler_proxy.go:93] no RequestInfo found in the context
	E0103 20:29:48.433448       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0103 20:29:48.433591       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0103 20:29:48.433574       1 handler_proxy.go:93] no RequestInfo found in the context
	E0103 20:29:48.433705       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0103 20:29:48.435641       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0103 20:31:48.434029       1 handler_proxy.go:93] no RequestInfo found in the context
	E0103 20:31:48.434431       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0103 20:31:48.434450       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0103 20:31:48.436493       1 handler_proxy.go:93] no RequestInfo found in the context
	E0103 20:31:48.436654       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0103 20:31:48.436662       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [67f470e7e603d5253da506a4ad4eacce339e32b382bb6ad5e981e6f0c40abb85] <==
	I0103 20:27:31.040713       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0103 20:28:00.662290       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0103 20:28:01.050669       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0103 20:28:30.667643       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0103 20:28:31.061039       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0103 20:29:00.674139       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0103 20:29:01.073478       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0103 20:29:30.679700       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0103 20:29:31.084025       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0103 20:30:00.685076       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0103 20:30:01.094022       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0103 20:30:10.209031       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="333.502µs"
	I0103 20:30:25.207496       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="73.236µs"
	E0103 20:30:30.692017       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0103 20:30:31.103293       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0103 20:31:00.698917       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0103 20:31:01.127462       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0103 20:31:30.705066       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0103 20:31:31.136401       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0103 20:32:00.711109       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0103 20:32:01.150241       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0103 20:32:30.716540       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0103 20:32:31.165411       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0103 20:33:00.723703       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0103 20:33:01.175526       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [250be399ab1a0c358e410293a6952aadd55d48a462f2edb9c6d0b560eb323cd8] <==
	I0103 20:13:49.412638       1 server_others.go:72] "Using iptables proxy"
	I0103 20:13:49.454641       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.61.245"]
	I0103 20:13:49.582896       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0103 20:13:49.582961       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0103 20:13:49.582990       1 server_others.go:168] "Using iptables Proxier"
	I0103 20:13:49.586341       1 proxier.go:246] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0103 20:13:49.586629       1 server.go:865] "Version info" version="v1.29.0-rc.2"
	I0103 20:13:49.586893       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0103 20:13:49.588892       1 config.go:188] "Starting service config controller"
	I0103 20:13:49.591750       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0103 20:13:49.589310       1 config.go:97] "Starting endpoint slice config controller"
	I0103 20:13:49.591916       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0103 20:13:49.592056       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0103 20:13:49.590373       1 config.go:315] "Starting node config controller"
	I0103 20:13:49.592166       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0103 20:13:49.692952       1 shared_informer.go:318] Caches are synced for node config
	I0103 20:13:49.693077       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [03433af76d74a120ca843ab0d9d45d2a0808d9048bf656bff68d7b7371082893] <==
	I0103 20:13:44.700343       1 serving.go:380] Generated self-signed cert in-memory
	W0103 20:13:47.298040       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0103 20:13:47.298095       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0103 20:13:47.298109       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0103 20:13:47.298117       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0103 20:13:47.429589       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.0-rc.2"
	I0103 20:13:47.429929       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0103 20:13:47.445288       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0103 20:13:47.451123       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0103 20:13:47.451198       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0103 20:13:47.451462       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0103 20:13:47.552507       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Wed 2024-01-03 20:13:01 UTC, ends at Wed 2024-01-03 20:33:03 UTC. --
	Jan 03 20:30:25 no-preload-749210 kubelet[1334]: E0103 20:30:25.192288    1334 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tqn5m" podUID="8cc1dc91-fafb-4405-8820-a7f99ccbbb0c"
	Jan 03 20:30:36 no-preload-749210 kubelet[1334]: E0103 20:30:36.191102    1334 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tqn5m" podUID="8cc1dc91-fafb-4405-8820-a7f99ccbbb0c"
	Jan 03 20:30:41 no-preload-749210 kubelet[1334]: E0103 20:30:41.212855    1334 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 03 20:30:41 no-preload-749210 kubelet[1334]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 03 20:30:41 no-preload-749210 kubelet[1334]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 03 20:30:41 no-preload-749210 kubelet[1334]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 03 20:30:49 no-preload-749210 kubelet[1334]: E0103 20:30:49.191766    1334 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tqn5m" podUID="8cc1dc91-fafb-4405-8820-a7f99ccbbb0c"
	Jan 03 20:31:03 no-preload-749210 kubelet[1334]: E0103 20:31:03.192189    1334 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tqn5m" podUID="8cc1dc91-fafb-4405-8820-a7f99ccbbb0c"
	Jan 03 20:31:16 no-preload-749210 kubelet[1334]: E0103 20:31:16.191634    1334 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tqn5m" podUID="8cc1dc91-fafb-4405-8820-a7f99ccbbb0c"
	Jan 03 20:31:30 no-preload-749210 kubelet[1334]: E0103 20:31:30.191687    1334 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tqn5m" podUID="8cc1dc91-fafb-4405-8820-a7f99ccbbb0c"
	Jan 03 20:31:41 no-preload-749210 kubelet[1334]: E0103 20:31:41.216145    1334 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 03 20:31:41 no-preload-749210 kubelet[1334]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 03 20:31:41 no-preload-749210 kubelet[1334]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 03 20:31:41 no-preload-749210 kubelet[1334]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 03 20:31:44 no-preload-749210 kubelet[1334]: E0103 20:31:44.191649    1334 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tqn5m" podUID="8cc1dc91-fafb-4405-8820-a7f99ccbbb0c"
	Jan 03 20:31:58 no-preload-749210 kubelet[1334]: E0103 20:31:58.190900    1334 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tqn5m" podUID="8cc1dc91-fafb-4405-8820-a7f99ccbbb0c"
	Jan 03 20:32:09 no-preload-749210 kubelet[1334]: E0103 20:32:09.192089    1334 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tqn5m" podUID="8cc1dc91-fafb-4405-8820-a7f99ccbbb0c"
	Jan 03 20:32:20 no-preload-749210 kubelet[1334]: E0103 20:32:20.192015    1334 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tqn5m" podUID="8cc1dc91-fafb-4405-8820-a7f99ccbbb0c"
	Jan 03 20:32:35 no-preload-749210 kubelet[1334]: E0103 20:32:35.192006    1334 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tqn5m" podUID="8cc1dc91-fafb-4405-8820-a7f99ccbbb0c"
	Jan 03 20:32:41 no-preload-749210 kubelet[1334]: E0103 20:32:41.214917    1334 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 03 20:32:41 no-preload-749210 kubelet[1334]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 03 20:32:41 no-preload-749210 kubelet[1334]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 03 20:32:41 no-preload-749210 kubelet[1334]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 03 20:32:46 no-preload-749210 kubelet[1334]: E0103 20:32:46.190853    1334 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tqn5m" podUID="8cc1dc91-fafb-4405-8820-a7f99ccbbb0c"
	Jan 03 20:32:58 no-preload-749210 kubelet[1334]: E0103 20:32:58.192558    1334 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tqn5m" podUID="8cc1dc91-fafb-4405-8820-a7f99ccbbb0c"
	
	
	==> storage-provisioner [08f95eed823c13190efb38f0b605b92442b8229f1fd1e3b9ab3f2d7fdf18c052] <==
	I0103 20:14:19.587994       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0103 20:14:19.600041       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0103 20:14:19.600093       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0103 20:14:37.004521       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0103 20:14:37.007084       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-749210_3a85c888-e7a3-4f6e-8df3-3e4fbcedf466!
	I0103 20:14:37.008071       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"80492e63-5321-45f4-a1ba-064f0ee67d00", APIVersion:"v1", ResourceVersion:"611", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-749210_3a85c888-e7a3-4f6e-8df3-3e4fbcedf466 became leader
	I0103 20:14:37.108986       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-749210_3a85c888-e7a3-4f6e-8df3-3e4fbcedf466!
	
	
	==> storage-provisioner [367b9549fe5f7c717d71d33d0fbf5559d0b671b4eec29201566aa8354781474d] <==
	I0103 20:13:49.358296       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0103 20:14:19.361429       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-749210 -n no-preload-749210
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-749210 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-tqn5m
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-749210 describe pod metrics-server-57f55c9bc5-tqn5m
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-749210 describe pod metrics-server-57f55c9bc5-tqn5m: exit status 1 (73.971975ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-tqn5m" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-749210 describe pod metrics-server-57f55c9bc5-tqn5m: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (346.28s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (432.63s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0103 20:27:27.748406   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/enable-default-cni-719541/client.crt: no such file or directory
E0103 20:28:32.532322   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/flannel-719541/client.crt: no such file or directory
E0103 20:29:07.102279   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/ingress-addon-legacy-736101/client.crt: no such file or directory
E0103 20:29:09.452695   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/bridge-719541/client.crt: no such file or directory
E0103 20:29:21.012810   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/auto-719541/client.crt: no such file or directory
E0103 20:29:48.942597   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/kindnet-719541/client.crt: no such file or directory
E0103 20:30:48.654489   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/functional-166268/client.crt: no such file or directory
E0103 20:30:55.308283   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/addons-848866/client.crt: no such file or directory
E0103 20:31:30.039451   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/calico-719541/client.crt: no such file or directory
E0103 20:31:42.554434   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/custom-flannel-719541/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-018788 -n default-k8s-diff-port-018788
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-01-03 20:34:36.952728119 +0000 UTC m=+5842.425305099
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-018788 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-018788 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.549µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-018788 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-018788 -n default-k8s-diff-port-018788
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-018788 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-018788 logs -n 25: (1.248217107s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p bridge-719541                                       | bridge-719541                | jenkins | v1.32.0 | 03 Jan 24 20:04 UTC | 03 Jan 24 20:04 UTC |
	| delete  | -p                                                     | disable-driver-mounts-350596 | jenkins | v1.32.0 | 03 Jan 24 20:04 UTC | 03 Jan 24 20:04 UTC |
	|         | disable-driver-mounts-350596                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-018788 | jenkins | v1.32.0 | 03 Jan 24 20:04 UTC | 03 Jan 24 20:06 UTC |
	|         | default-k8s-diff-port-018788                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-927922        | old-k8s-version-927922       | jenkins | v1.32.0 | 03 Jan 24 20:05 UTC | 03 Jan 24 20:05 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-927922                              | old-k8s-version-927922       | jenkins | v1.32.0 | 03 Jan 24 20:05 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-451331            | embed-certs-451331           | jenkins | v1.32.0 | 03 Jan 24 20:05 UTC | 03 Jan 24 20:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-451331                                  | embed-certs-451331           | jenkins | v1.32.0 | 03 Jan 24 20:06 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-749210             | no-preload-749210            | jenkins | v1.32.0 | 03 Jan 24 20:06 UTC | 03 Jan 24 20:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-749210                                   | no-preload-749210            | jenkins | v1.32.0 | 03 Jan 24 20:06 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-018788  | default-k8s-diff-port-018788 | jenkins | v1.32.0 | 03 Jan 24 20:06 UTC | 03 Jan 24 20:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-018788 | jenkins | v1.32.0 | 03 Jan 24 20:06 UTC |                     |
	|         | default-k8s-diff-port-018788                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-927922             | old-k8s-version-927922       | jenkins | v1.32.0 | 03 Jan 24 20:07 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-927922                              | old-k8s-version-927922       | jenkins | v1.32.0 | 03 Jan 24 20:07 UTC | 03 Jan 24 20:14 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-451331                 | embed-certs-451331           | jenkins | v1.32.0 | 03 Jan 24 20:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-451331                                  | embed-certs-451331           | jenkins | v1.32.0 | 03 Jan 24 20:08 UTC | 03 Jan 24 20:17 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-749210                  | no-preload-749210            | jenkins | v1.32.0 | 03 Jan 24 20:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-018788       | default-k8s-diff-port-018788 | jenkins | v1.32.0 | 03 Jan 24 20:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-749210                                   | no-preload-749210            | jenkins | v1.32.0 | 03 Jan 24 20:09 UTC | 03 Jan 24 20:18 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-018788 | jenkins | v1.32.0 | 03 Jan 24 20:09 UTC | 03 Jan 24 20:18 UTC |
	|         | default-k8s-diff-port-018788                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-927922                              | old-k8s-version-927922       | jenkins | v1.32.0 | 03 Jan 24 20:32 UTC | 03 Jan 24 20:32 UTC |
	| start   | -p newest-cni-195281 --memory=2200 --alsologtostderr   | newest-cni-195281            | jenkins | v1.32.0 | 03 Jan 24 20:32 UTC | 03 Jan 24 20:33 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p no-preload-749210                                   | no-preload-749210            | jenkins | v1.32.0 | 03 Jan 24 20:33 UTC | 03 Jan 24 20:33 UTC |
	| addons  | enable metrics-server -p newest-cni-195281             | newest-cni-195281            | jenkins | v1.32.0 | 03 Jan 24 20:33 UTC | 03 Jan 24 20:33 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-195281                                   | newest-cni-195281            | jenkins | v1.32.0 | 03 Jan 24 20:33 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-451331                                  | embed-certs-451331           | jenkins | v1.32.0 | 03 Jan 24 20:34 UTC | 03 Jan 24 20:34 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/03 20:32:19
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0103 20:32:19.309136   67249 out.go:296] Setting OutFile to fd 1 ...
	I0103 20:32:19.309476   67249 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 20:32:19.309490   67249 out.go:309] Setting ErrFile to fd 2...
	I0103 20:32:19.309497   67249 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 20:32:19.309714   67249 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17885-9609/.minikube/bin
	I0103 20:32:19.310342   67249 out.go:303] Setting JSON to false
	I0103 20:32:19.311306   67249 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8087,"bootTime":1704305853,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0103 20:32:19.311373   67249 start.go:138] virtualization: kvm guest
	I0103 20:32:19.314262   67249 out.go:177] * [newest-cni-195281] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0103 20:32:19.316078   67249 out.go:177]   - MINIKUBE_LOCATION=17885
	I0103 20:32:19.316020   67249 notify.go:220] Checking for updates...
	I0103 20:32:19.318020   67249 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0103 20:32:19.319745   67249 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17885-9609/kubeconfig
	I0103 20:32:19.321476   67249 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17885-9609/.minikube
	I0103 20:32:19.323306   67249 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0103 20:32:19.325247   67249 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0103 20:32:19.327385   67249 config.go:182] Loaded profile config "default-k8s-diff-port-018788": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 20:32:19.327493   67249 config.go:182] Loaded profile config "embed-certs-451331": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 20:32:19.327621   67249 config.go:182] Loaded profile config "no-preload-749210": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0103 20:32:19.327723   67249 driver.go:392] Setting default libvirt URI to qemu:///system
	I0103 20:32:19.368449   67249 out.go:177] * Using the kvm2 driver based on user configuration
	I0103 20:32:19.369981   67249 start.go:298] selected driver: kvm2
	I0103 20:32:19.369999   67249 start.go:902] validating driver "kvm2" against <nil>
	I0103 20:32:19.370010   67249 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0103 20:32:19.370814   67249 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:32:19.370900   67249 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17885-9609/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0103 20:32:19.386697   67249 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0103 20:32:19.386765   67249 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	W0103 20:32:19.386794   67249 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0103 20:32:19.387069   67249 start_flags.go:950] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0103 20:32:19.387130   67249 cni.go:84] Creating CNI manager for ""
	I0103 20:32:19.387146   67249 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0103 20:32:19.387180   67249 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0103 20:32:19.387187   67249 start_flags.go:323] config:
	{Name:newest-cni-195281 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-195281 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 20:32:19.387359   67249 iso.go:125] acquiring lock: {Name:mk59d09085a9554144b68de9b7bfe0e0fce53cc5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:32:19.390156   67249 out.go:177] * Starting control plane node newest-cni-195281 in cluster newest-cni-195281
	I0103 20:32:19.391874   67249 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0103 20:32:19.391934   67249 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17885-9609/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0103 20:32:19.391952   67249 cache.go:56] Caching tarball of preloaded images
	I0103 20:32:19.392059   67249 preload.go:174] Found /home/jenkins/minikube-integration/17885-9609/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0103 20:32:19.392071   67249 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on crio
	I0103 20:32:19.392191   67249 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/newest-cni-195281/config.json ...
	I0103 20:32:19.392208   67249 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/newest-cni-195281/config.json: {Name:mk604433cce431aecc704e6ae9cbe8e69956f33d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:32:19.392355   67249 start.go:365] acquiring machines lock for newest-cni-195281: {Name:mk43df5d7e9fef8aa5f3e5c539ca15bff35ae8cf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0103 20:32:19.392390   67249 start.go:369] acquired machines lock for "newest-cni-195281" in 22.434µs
	I0103 20:32:19.392407   67249 start.go:93] Provisioning new machine with config: &{Name:newest-cni-195281 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-195281 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0103 20:32:19.392486   67249 start.go:125] createHost starting for "" (driver="kvm2")
	I0103 20:32:19.394467   67249 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0103 20:32:19.394687   67249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:32:19.394745   67249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:32:19.410171   67249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37657
	I0103 20:32:19.410720   67249 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:32:19.411315   67249 main.go:141] libmachine: Using API Version  1
	I0103 20:32:19.411339   67249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:32:19.411722   67249 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:32:19.411889   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetMachineName
	I0103 20:32:19.412083   67249 main.go:141] libmachine: (newest-cni-195281) Calling .DriverName
	I0103 20:32:19.412262   67249 start.go:159] libmachine.API.Create for "newest-cni-195281" (driver="kvm2")
	I0103 20:32:19.412296   67249 client.go:168] LocalClient.Create starting
	I0103 20:32:19.412334   67249 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem
	I0103 20:32:19.412371   67249 main.go:141] libmachine: Decoding PEM data...
	I0103 20:32:19.412386   67249 main.go:141] libmachine: Parsing certificate...
	I0103 20:32:19.412440   67249 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem
	I0103 20:32:19.412472   67249 main.go:141] libmachine: Decoding PEM data...
	I0103 20:32:19.412486   67249 main.go:141] libmachine: Parsing certificate...
	I0103 20:32:19.412501   67249 main.go:141] libmachine: Running pre-create checks...
	I0103 20:32:19.412510   67249 main.go:141] libmachine: (newest-cni-195281) Calling .PreCreateCheck
	I0103 20:32:19.412860   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetConfigRaw
	I0103 20:32:19.413237   67249 main.go:141] libmachine: Creating machine...
	I0103 20:32:19.413252   67249 main.go:141] libmachine: (newest-cni-195281) Calling .Create
	I0103 20:32:19.413368   67249 main.go:141] libmachine: (newest-cni-195281) Creating KVM machine...
	I0103 20:32:19.414780   67249 main.go:141] libmachine: (newest-cni-195281) DBG | found existing default KVM network
	I0103 20:32:19.416065   67249 main.go:141] libmachine: (newest-cni-195281) DBG | I0103 20:32:19.415922   67271 network.go:214] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:6a:55:bb} reservation:<nil>}
	I0103 20:32:19.417061   67249 main.go:141] libmachine: (newest-cni-195281) DBG | I0103 20:32:19.416867   67271 network.go:214] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:e5:bd:db} reservation:<nil>}
	I0103 20:32:19.417786   67249 main.go:141] libmachine: (newest-cni-195281) DBG | I0103 20:32:19.417674   67271 network.go:214] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:ae:17:ed} reservation:<nil>}
	I0103 20:32:19.418963   67249 main.go:141] libmachine: (newest-cni-195281) DBG | I0103 20:32:19.418888   67271 network.go:209] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00027f800}
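
The network.go lines above show the driver probing candidate private /24 subnets in order, skipping the three already held by existing profiles, and settling on 192.168.72.0/24 for the new mk-newest-cni-195281 network. A minimal Go sketch of that selection pattern follows; the hard-coded taken set stands in for the real libvirt network lookup and is purely illustrative.

    package main

    import "fmt"

    func main() {
        // Subnets the log reports as already occupied by other profiles
        // (hypothetical stand-in for querying libvirt for existing networks).
        taken := map[string]bool{
            "192.168.39.0/24": true,
            "192.168.50.0/24": true,
            "192.168.61.0/24": true,
        }
        // Candidate subnets, probed in the same order as the log.
        candidates := []string{"192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24", "192.168.72.0/24"}
        for _, c := range candidates {
            if taken[c] {
                fmt.Printf("skipping subnet %s that is taken\n", c)
                continue
            }
            fmt.Printf("using free private subnet %s\n", c)
            break
        }
    }
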
	I0103 20:32:19.425096   67249 main.go:141] libmachine: (newest-cni-195281) DBG | trying to create private KVM network mk-newest-cni-195281 192.168.72.0/24...
	I0103 20:32:19.509409   67249 main.go:141] libmachine: (newest-cni-195281) Setting up store path in /home/jenkins/minikube-integration/17885-9609/.minikube/machines/newest-cni-195281 ...
	I0103 20:32:19.509454   67249 main.go:141] libmachine: (newest-cni-195281) DBG | private KVM network mk-newest-cni-195281 192.168.72.0/24 created
	I0103 20:32:19.509473   67249 main.go:141] libmachine: (newest-cni-195281) Building disk image from file:///home/jenkins/minikube-integration/17885-9609/.minikube/cache/iso/amd64/minikube-v1.32.1-1702708929-17806-amd64.iso
	I0103 20:32:19.509514   67249 main.go:141] libmachine: (newest-cni-195281) Downloading /home/jenkins/minikube-integration/17885-9609/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17885-9609/.minikube/cache/iso/amd64/minikube-v1.32.1-1702708929-17806-amd64.iso...
	I0103 20:32:19.509675   67249 main.go:141] libmachine: (newest-cni-195281) DBG | I0103 20:32:19.509290   67271 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17885-9609/.minikube
	I0103 20:32:19.721072   67249 main.go:141] libmachine: (newest-cni-195281) DBG | I0103 20:32:19.720924   67271 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/newest-cni-195281/id_rsa...
	I0103 20:32:19.797041   67249 main.go:141] libmachine: (newest-cni-195281) DBG | I0103 20:32:19.796916   67271 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/newest-cni-195281/newest-cni-195281.rawdisk...
	I0103 20:32:19.797066   67249 main.go:141] libmachine: (newest-cni-195281) DBG | Writing magic tar header
	I0103 20:32:19.797080   67249 main.go:141] libmachine: (newest-cni-195281) DBG | Writing SSH key tar header
	I0103 20:32:19.797089   67249 main.go:141] libmachine: (newest-cni-195281) DBG | I0103 20:32:19.797050   67271 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17885-9609/.minikube/machines/newest-cni-195281 ...
	I0103 20:32:19.797185   67249 main.go:141] libmachine: (newest-cni-195281) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/newest-cni-195281
	I0103 20:32:19.797212   67249 main.go:141] libmachine: (newest-cni-195281) Setting executable bit set on /home/jenkins/minikube-integration/17885-9609/.minikube/machines/newest-cni-195281 (perms=drwx------)
	I0103 20:32:19.797223   67249 main.go:141] libmachine: (newest-cni-195281) Setting executable bit set on /home/jenkins/minikube-integration/17885-9609/.minikube/machines (perms=drwxr-xr-x)
	I0103 20:32:19.797237   67249 main.go:141] libmachine: (newest-cni-195281) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17885-9609/.minikube/machines
	I0103 20:32:19.797270   67249 main.go:141] libmachine: (newest-cni-195281) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17885-9609/.minikube
	I0103 20:32:19.797283   67249 main.go:141] libmachine: (newest-cni-195281) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17885-9609
	I0103 20:32:19.797291   67249 main.go:141] libmachine: (newest-cni-195281) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0103 20:32:19.797298   67249 main.go:141] libmachine: (newest-cni-195281) DBG | Checking permissions on dir: /home/jenkins
	I0103 20:32:19.797330   67249 main.go:141] libmachine: (newest-cni-195281) Setting executable bit set on /home/jenkins/minikube-integration/17885-9609/.minikube (perms=drwxr-xr-x)
	I0103 20:32:19.797359   67249 main.go:141] libmachine: (newest-cni-195281) DBG | Checking permissions on dir: /home
	I0103 20:32:19.797376   67249 main.go:141] libmachine: (newest-cni-195281) Setting executable bit set on /home/jenkins/minikube-integration/17885-9609 (perms=drwxrwxr-x)
	I0103 20:32:19.797390   67249 main.go:141] libmachine: (newest-cni-195281) DBG | Skipping /home - not owner
	I0103 20:32:19.797420   67249 main.go:141] libmachine: (newest-cni-195281) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0103 20:32:19.797443   67249 main.go:141] libmachine: (newest-cni-195281) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0103 20:32:19.797465   67249 main.go:141] libmachine: (newest-cni-195281) Creating domain...
	I0103 20:32:19.798661   67249 main.go:141] libmachine: (newest-cni-195281) define libvirt domain using xml: 
	I0103 20:32:19.798699   67249 main.go:141] libmachine: (newest-cni-195281) <domain type='kvm'>
	I0103 20:32:19.798733   67249 main.go:141] libmachine: (newest-cni-195281)   <name>newest-cni-195281</name>
	I0103 20:32:19.798765   67249 main.go:141] libmachine: (newest-cni-195281)   <memory unit='MiB'>2200</memory>
	I0103 20:32:19.798780   67249 main.go:141] libmachine: (newest-cni-195281)   <vcpu>2</vcpu>
	I0103 20:32:19.798790   67249 main.go:141] libmachine: (newest-cni-195281)   <features>
	I0103 20:32:19.798802   67249 main.go:141] libmachine: (newest-cni-195281)     <acpi/>
	I0103 20:32:19.798814   67249 main.go:141] libmachine: (newest-cni-195281)     <apic/>
	I0103 20:32:19.798826   67249 main.go:141] libmachine: (newest-cni-195281)     <pae/>
	I0103 20:32:19.798836   67249 main.go:141] libmachine: (newest-cni-195281)     
	I0103 20:32:19.798862   67249 main.go:141] libmachine: (newest-cni-195281)   </features>
	I0103 20:32:19.798981   67249 main.go:141] libmachine: (newest-cni-195281)   <cpu mode='host-passthrough'>
	I0103 20:32:19.799017   67249 main.go:141] libmachine: (newest-cni-195281)   
	I0103 20:32:19.799041   67249 main.go:141] libmachine: (newest-cni-195281)   </cpu>
	I0103 20:32:19.799055   67249 main.go:141] libmachine: (newest-cni-195281)   <os>
	I0103 20:32:19.799068   67249 main.go:141] libmachine: (newest-cni-195281)     <type>hvm</type>
	I0103 20:32:19.799083   67249 main.go:141] libmachine: (newest-cni-195281)     <boot dev='cdrom'/>
	I0103 20:32:19.799096   67249 main.go:141] libmachine: (newest-cni-195281)     <boot dev='hd'/>
	I0103 20:32:19.799111   67249 main.go:141] libmachine: (newest-cni-195281)     <bootmenu enable='no'/>
	I0103 20:32:19.799123   67249 main.go:141] libmachine: (newest-cni-195281)   </os>
	I0103 20:32:19.799136   67249 main.go:141] libmachine: (newest-cni-195281)   <devices>
	I0103 20:32:19.799152   67249 main.go:141] libmachine: (newest-cni-195281)     <disk type='file' device='cdrom'>
	I0103 20:32:19.799170   67249 main.go:141] libmachine: (newest-cni-195281)       <source file='/home/jenkins/minikube-integration/17885-9609/.minikube/machines/newest-cni-195281/boot2docker.iso'/>
	I0103 20:32:19.799186   67249 main.go:141] libmachine: (newest-cni-195281)       <target dev='hdc' bus='scsi'/>
	I0103 20:32:19.799199   67249 main.go:141] libmachine: (newest-cni-195281)       <readonly/>
	I0103 20:32:19.799223   67249 main.go:141] libmachine: (newest-cni-195281)     </disk>
	I0103 20:32:19.799240   67249 main.go:141] libmachine: (newest-cni-195281)     <disk type='file' device='disk'>
	I0103 20:32:19.799264   67249 main.go:141] libmachine: (newest-cni-195281)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0103 20:32:19.799305   67249 main.go:141] libmachine: (newest-cni-195281)       <source file='/home/jenkins/minikube-integration/17885-9609/.minikube/machines/newest-cni-195281/newest-cni-195281.rawdisk'/>
	I0103 20:32:19.799322   67249 main.go:141] libmachine: (newest-cni-195281)       <target dev='hda' bus='virtio'/>
	I0103 20:32:19.799333   67249 main.go:141] libmachine: (newest-cni-195281)     </disk>
	I0103 20:32:19.799344   67249 main.go:141] libmachine: (newest-cni-195281)     <interface type='network'>
	I0103 20:32:19.799357   67249 main.go:141] libmachine: (newest-cni-195281)       <source network='mk-newest-cni-195281'/>
	I0103 20:32:19.799371   67249 main.go:141] libmachine: (newest-cni-195281)       <model type='virtio'/>
	I0103 20:32:19.799383   67249 main.go:141] libmachine: (newest-cni-195281)     </interface>
	I0103 20:32:19.799397   67249 main.go:141] libmachine: (newest-cni-195281)     <interface type='network'>
	I0103 20:32:19.799409   67249 main.go:141] libmachine: (newest-cni-195281)       <source network='default'/>
	I0103 20:32:19.799423   67249 main.go:141] libmachine: (newest-cni-195281)       <model type='virtio'/>
	I0103 20:32:19.799436   67249 main.go:141] libmachine: (newest-cni-195281)     </interface>
	I0103 20:32:19.799451   67249 main.go:141] libmachine: (newest-cni-195281)     <serial type='pty'>
	I0103 20:32:19.799463   67249 main.go:141] libmachine: (newest-cni-195281)       <target port='0'/>
	I0103 20:32:19.799483   67249 main.go:141] libmachine: (newest-cni-195281)     </serial>
	I0103 20:32:19.799496   67249 main.go:141] libmachine: (newest-cni-195281)     <console type='pty'>
	I0103 20:32:19.799515   67249 main.go:141] libmachine: (newest-cni-195281)       <target type='serial' port='0'/>
	I0103 20:32:19.799534   67249 main.go:141] libmachine: (newest-cni-195281)     </console>
	I0103 20:32:19.799552   67249 main.go:141] libmachine: (newest-cni-195281)     <rng model='virtio'>
	I0103 20:32:19.799565   67249 main.go:141] libmachine: (newest-cni-195281)       <backend model='random'>/dev/random</backend>
	I0103 20:32:19.799580   67249 main.go:141] libmachine: (newest-cni-195281)     </rng>
	I0103 20:32:19.799592   67249 main.go:141] libmachine: (newest-cni-195281)     
	I0103 20:32:19.799605   67249 main.go:141] libmachine: (newest-cni-195281)     
	I0103 20:32:19.799614   67249 main.go:141] libmachine: (newest-cni-195281)   </devices>
	I0103 20:32:19.799626   67249 main.go:141] libmachine: (newest-cni-195281) </domain>
	I0103 20:32:19.799640   67249 main.go:141] libmachine: (newest-cni-195281) 
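
The lines above are the libvirt domain definition minikube generates for this VM: 2 vCPUs, 2200 MiB of memory, the boot2docker ISO attached as a CD-ROM, the raw disk image, two virtio NICs (one on mk-newest-cni-195281, one on the default network), a serial console, and a virtio RNG. The kvm2 driver submits this XML to libvirt directly; the sketch below reaches a similar end state by shelling out to virsh instead, and assumes the XML was saved to a local file named newest-cni-195281.xml (both the virsh detour and the file name are assumptions, not what the driver does).

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Define the domain from the saved XML, then start it, against the
        // same qemu:///system URI the log shows.
        for _, args := range [][]string{
            {"-c", "qemu:///system", "define", "newest-cni-195281.xml"},
            {"-c", "qemu:///system", "start", "newest-cni-195281"},
        } {
            out, err := exec.Command("virsh", args...).CombinedOutput()
            if err != nil {
                fmt.Printf("virsh %v failed: %v\n%s", args, err, out)
                return
            }
            fmt.Printf("%s", out)
        }
    }
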
	I0103 20:32:19.803863   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:21:41:b4 in network default
	I0103 20:32:19.804577   67249 main.go:141] libmachine: (newest-cni-195281) Ensuring networks are active...
	I0103 20:32:19.804622   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:19.805388   67249 main.go:141] libmachine: (newest-cni-195281) Ensuring network default is active
	I0103 20:32:19.805848   67249 main.go:141] libmachine: (newest-cni-195281) Ensuring network mk-newest-cni-195281 is active
	I0103 20:32:19.806341   67249 main.go:141] libmachine: (newest-cni-195281) Getting domain xml...
	I0103 20:32:19.807082   67249 main.go:141] libmachine: (newest-cni-195281) Creating domain...
	I0103 20:32:21.132770   67249 main.go:141] libmachine: (newest-cni-195281) Waiting to get IP...
	I0103 20:32:21.134841   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:21.135341   67249 main.go:141] libmachine: (newest-cni-195281) DBG | unable to find current IP address of domain newest-cni-195281 in network mk-newest-cni-195281
	I0103 20:32:21.135366   67249 main.go:141] libmachine: (newest-cni-195281) DBG | I0103 20:32:21.135310   67271 retry.go:31] will retry after 211.135104ms: waiting for machine to come up
	I0103 20:32:21.347666   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:21.348235   67249 main.go:141] libmachine: (newest-cni-195281) DBG | unable to find current IP address of domain newest-cni-195281 in network mk-newest-cni-195281
	I0103 20:32:21.348261   67249 main.go:141] libmachine: (newest-cni-195281) DBG | I0103 20:32:21.348145   67271 retry.go:31] will retry after 323.28225ms: waiting for machine to come up
	I0103 20:32:21.672767   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:21.673311   67249 main.go:141] libmachine: (newest-cni-195281) DBG | unable to find current IP address of domain newest-cni-195281 in network mk-newest-cni-195281
	I0103 20:32:21.673343   67249 main.go:141] libmachine: (newest-cni-195281) DBG | I0103 20:32:21.673263   67271 retry.go:31] will retry after 371.328166ms: waiting for machine to come up
	I0103 20:32:22.045877   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:22.046594   67249 main.go:141] libmachine: (newest-cni-195281) DBG | unable to find current IP address of domain newest-cni-195281 in network mk-newest-cni-195281
	I0103 20:32:22.046630   67249 main.go:141] libmachine: (newest-cni-195281) DBG | I0103 20:32:22.046495   67271 retry.go:31] will retry after 424.478536ms: waiting for machine to come up
	I0103 20:32:22.472185   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:22.472629   67249 main.go:141] libmachine: (newest-cni-195281) DBG | unable to find current IP address of domain newest-cni-195281 in network mk-newest-cni-195281
	I0103 20:32:22.472661   67249 main.go:141] libmachine: (newest-cni-195281) DBG | I0103 20:32:22.472550   67271 retry.go:31] will retry after 661.63112ms: waiting for machine to come up
	I0103 20:32:23.135501   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:23.135980   67249 main.go:141] libmachine: (newest-cni-195281) DBG | unable to find current IP address of domain newest-cni-195281 in network mk-newest-cni-195281
	I0103 20:32:23.136011   67249 main.go:141] libmachine: (newest-cni-195281) DBG | I0103 20:32:23.135936   67271 retry.go:31] will retry after 627.099478ms: waiting for machine to come up
	I0103 20:32:23.764511   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:23.764964   67249 main.go:141] libmachine: (newest-cni-195281) DBG | unable to find current IP address of domain newest-cni-195281 in network mk-newest-cni-195281
	I0103 20:32:23.764993   67249 main.go:141] libmachine: (newest-cni-195281) DBG | I0103 20:32:23.764917   67271 retry.go:31] will retry after 1.023643059s: waiting for machine to come up
	I0103 20:32:24.790457   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:24.791000   67249 main.go:141] libmachine: (newest-cni-195281) DBG | unable to find current IP address of domain newest-cni-195281 in network mk-newest-cni-195281
	I0103 20:32:24.791033   67249 main.go:141] libmachine: (newest-cni-195281) DBG | I0103 20:32:24.790947   67271 retry.go:31] will retry after 1.372445622s: waiting for machine to come up
	I0103 20:32:26.165309   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:26.165782   67249 main.go:141] libmachine: (newest-cni-195281) DBG | unable to find current IP address of domain newest-cni-195281 in network mk-newest-cni-195281
	I0103 20:32:26.165801   67249 main.go:141] libmachine: (newest-cni-195281) DBG | I0103 20:32:26.165734   67271 retry.go:31] will retry after 1.684754533s: waiting for machine to come up
	I0103 20:32:27.851684   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:27.852122   67249 main.go:141] libmachine: (newest-cni-195281) DBG | unable to find current IP address of domain newest-cni-195281 in network mk-newest-cni-195281
	I0103 20:32:27.852160   67249 main.go:141] libmachine: (newest-cni-195281) DBG | I0103 20:32:27.852062   67271 retry.go:31] will retry after 1.693836467s: waiting for machine to come up
	I0103 20:32:29.547539   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:29.548051   67249 main.go:141] libmachine: (newest-cni-195281) DBG | unable to find current IP address of domain newest-cni-195281 in network mk-newest-cni-195281
	I0103 20:32:29.548080   67249 main.go:141] libmachine: (newest-cni-195281) DBG | I0103 20:32:29.548006   67271 retry.go:31] will retry after 2.126952355s: waiting for machine to come up
	I0103 20:32:31.676576   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:31.677064   67249 main.go:141] libmachine: (newest-cni-195281) DBG | unable to find current IP address of domain newest-cni-195281 in network mk-newest-cni-195281
	I0103 20:32:31.677093   67249 main.go:141] libmachine: (newest-cni-195281) DBG | I0103 20:32:31.677027   67271 retry.go:31] will retry after 3.435892014s: waiting for machine to come up
	I0103 20:32:35.114880   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:35.115371   67249 main.go:141] libmachine: (newest-cni-195281) DBG | unable to find current IP address of domain newest-cni-195281 in network mk-newest-cni-195281
	I0103 20:32:35.115397   67249 main.go:141] libmachine: (newest-cni-195281) DBG | I0103 20:32:35.115298   67271 retry.go:31] will retry after 3.914788696s: waiting for machine to come up
	I0103 20:32:39.034444   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:39.034917   67249 main.go:141] libmachine: (newest-cni-195281) DBG | unable to find current IP address of domain newest-cni-195281 in network mk-newest-cni-195281
	I0103 20:32:39.034950   67249 main.go:141] libmachine: (newest-cni-195281) DBG | I0103 20:32:39.034872   67271 retry.go:31] will retry after 5.092646295s: waiting for machine to come up
	I0103 20:32:44.131872   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:44.132395   67249 main.go:141] libmachine: (newest-cni-195281) Found IP for machine: 192.168.72.219
	I0103 20:32:44.132428   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has current primary IP address 192.168.72.219 and MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:44.132441   67249 main.go:141] libmachine: (newest-cni-195281) Reserving static IP address...
	I0103 20:32:44.132922   67249 main.go:141] libmachine: (newest-cni-195281) DBG | unable to find host DHCP lease matching {name: "newest-cni-195281", mac: "52:54:00:5a:49:af", ip: "192.168.72.219"} in network mk-newest-cni-195281
	I0103 20:32:44.216469   67249 main.go:141] libmachine: (newest-cni-195281) Reserved static IP address: 192.168.72.219
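
Between 20:32:21 and 20:32:44 the driver polls for the VM's DHCP lease, retrying with a delay that grows from roughly 200ms towards 5s until 192.168.72.219 appears, after which the address is reserved as a static lease. A small sketch of that wait-with-growing-delay pattern follows; the lookup callback, the growth factor and the 5s cap are illustrative assumptions chosen to match the intervals in the log, not minikube's exact retry code.

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // waitForIP polls lookup until it returns an address or maxWait elapses,
    // sleeping with a delay that grows after each failed attempt.
    func waitForIP(lookup func() (string, error), maxWait time.Duration) (string, error) {
        deadline := time.Now().Add(maxWait)
        delay := 200 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookup(); err == nil && ip != "" {
                return ip, nil
            }
            time.Sleep(delay)
            if delay < 5*time.Second {
                delay += delay / 2 // grow the delay, roughly as in the log
            }
        }
        return "", errors.New("timed out waiting for machine IP")
    }

    func main() {
        // Hypothetical lookup that "finds" the lease on the third poll.
        attempts := 0
        ip, err := waitForIP(func() (string, error) {
            attempts++
            if attempts < 3 {
                return "", errors.New("no DHCP lease yet")
            }
            return "192.168.72.219", nil
        }, 30*time.Second)
        fmt.Println(ip, err)
    }
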
	I0103 20:32:44.216511   67249 main.go:141] libmachine: (newest-cni-195281) Waiting for SSH to be available...
	I0103 20:32:44.216522   67249 main.go:141] libmachine: (newest-cni-195281) DBG | Getting to WaitForSSH function...
	I0103 20:32:44.219743   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:44.220136   67249 main.go:141] libmachine: (newest-cni-195281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:49:af", ip: ""} in network mk-newest-cni-195281: {Iface:virbr3 ExpiryTime:2024-01-03 21:32:34 +0000 UTC Type:0 Mac:52:54:00:5a:49:af Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:minikube Clientid:01:52:54:00:5a:49:af}
	I0103 20:32:44.220181   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined IP address 192.168.72.219 and MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:44.220352   67249 main.go:141] libmachine: (newest-cni-195281) DBG | Using SSH client type: external
	I0103 20:32:44.220382   67249 main.go:141] libmachine: (newest-cni-195281) DBG | Using SSH private key: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/newest-cni-195281/id_rsa (-rw-------)
	I0103 20:32:44.220427   67249 main.go:141] libmachine: (newest-cni-195281) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.219 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17885-9609/.minikube/machines/newest-cni-195281/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0103 20:32:44.220443   67249 main.go:141] libmachine: (newest-cni-195281) DBG | About to run SSH command:
	I0103 20:32:44.220472   67249 main.go:141] libmachine: (newest-cni-195281) DBG | exit 0
	I0103 20:32:44.358552   67249 main.go:141] libmachine: (newest-cni-195281) DBG | SSH cmd err, output: <nil>: 
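
The WaitForSSH step above simply runs "exit 0" through an external ssh client, using the machine's generated id_rsa key, key-only authentication and no known_hosts persistence; any connection failure or non-zero exit means the machine is not ready yet. The Go sketch below issues an equivalent probe with a subset of the flags shown in the log; the key path and address are copied from the log and should be read as placeholders.

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        key := "/home/jenkins/minikube-integration/17885-9609/.minikube/machines/newest-cni-195281/id_rsa"
        cmd := exec.Command("ssh",
            "-F", "/dev/null",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "IdentitiesOnly=yes",
            "-i", key,
            "-p", "22",
            "docker@192.168.72.219",
            "exit 0",
        )
        // A nil error means the command connected and exited 0, i.e. SSH is up.
        if out, err := cmd.CombinedOutput(); err != nil {
            fmt.Printf("ssh not ready yet: %v\n%s", err, out)
            return
        }
        fmt.Println("ssh is available")
    }
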
	I0103 20:32:44.358866   67249 main.go:141] libmachine: (newest-cni-195281) KVM machine creation complete!
	I0103 20:32:44.359216   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetConfigRaw
	I0103 20:32:44.359752   67249 main.go:141] libmachine: (newest-cni-195281) Calling .DriverName
	I0103 20:32:44.359969   67249 main.go:141] libmachine: (newest-cni-195281) Calling .DriverName
	I0103 20:32:44.360227   67249 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0103 20:32:44.360257   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetState
	I0103 20:32:44.361613   67249 main.go:141] libmachine: Detecting operating system of created instance...
	I0103 20:32:44.361632   67249 main.go:141] libmachine: Waiting for SSH to be available...
	I0103 20:32:44.361641   67249 main.go:141] libmachine: Getting to WaitForSSH function...
	I0103 20:32:44.361656   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHHostname
	I0103 20:32:44.364691   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:44.365073   67249 main.go:141] libmachine: (newest-cni-195281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:49:af", ip: ""} in network mk-newest-cni-195281: {Iface:virbr3 ExpiryTime:2024-01-03 21:32:34 +0000 UTC Type:0 Mac:52:54:00:5a:49:af Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:newest-cni-195281 Clientid:01:52:54:00:5a:49:af}
	I0103 20:32:44.365109   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined IP address 192.168.72.219 and MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:44.365248   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHPort
	I0103 20:32:44.365445   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHKeyPath
	I0103 20:32:44.365680   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHKeyPath
	I0103 20:32:44.365808   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHUsername
	I0103 20:32:44.365973   67249 main.go:141] libmachine: Using SSH client type: native
	I0103 20:32:44.366604   67249 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.72.219 22 <nil> <nil>}
	I0103 20:32:44.366626   67249 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0103 20:32:44.493837   67249 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0103 20:32:44.493867   67249 main.go:141] libmachine: Detecting the provisioner...
	I0103 20:32:44.493880   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHHostname
	I0103 20:32:44.497161   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:44.497541   67249 main.go:141] libmachine: (newest-cni-195281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:49:af", ip: ""} in network mk-newest-cni-195281: {Iface:virbr3 ExpiryTime:2024-01-03 21:32:34 +0000 UTC Type:0 Mac:52:54:00:5a:49:af Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:newest-cni-195281 Clientid:01:52:54:00:5a:49:af}
	I0103 20:32:44.497601   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined IP address 192.168.72.219 and MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:44.497794   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHPort
	I0103 20:32:44.498003   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHKeyPath
	I0103 20:32:44.498199   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHKeyPath
	I0103 20:32:44.498363   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHUsername
	I0103 20:32:44.498575   67249 main.go:141] libmachine: Using SSH client type: native
	I0103 20:32:44.499018   67249 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.72.219 22 <nil> <nil>}
	I0103 20:32:44.499033   67249 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0103 20:32:44.623686   67249 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gae27a7b-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0103 20:32:44.623771   67249 main.go:141] libmachine: found compatible host: buildroot
	I0103 20:32:44.623788   67249 main.go:141] libmachine: Provisioning with buildroot...
	I0103 20:32:44.623798   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetMachineName
	I0103 20:32:44.624047   67249 buildroot.go:166] provisioning hostname "newest-cni-195281"
	I0103 20:32:44.624075   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetMachineName
	I0103 20:32:44.624251   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHHostname
	I0103 20:32:44.627016   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:44.627435   67249 main.go:141] libmachine: (newest-cni-195281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:49:af", ip: ""} in network mk-newest-cni-195281: {Iface:virbr3 ExpiryTime:2024-01-03 21:32:34 +0000 UTC Type:0 Mac:52:54:00:5a:49:af Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:newest-cni-195281 Clientid:01:52:54:00:5a:49:af}
	I0103 20:32:44.627469   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined IP address 192.168.72.219 and MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:44.627629   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHPort
	I0103 20:32:44.627818   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHKeyPath
	I0103 20:32:44.627970   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHKeyPath
	I0103 20:32:44.628153   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHUsername
	I0103 20:32:44.628308   67249 main.go:141] libmachine: Using SSH client type: native
	I0103 20:32:44.628628   67249 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.72.219 22 <nil> <nil>}
	I0103 20:32:44.628643   67249 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-195281 && echo "newest-cni-195281" | sudo tee /etc/hostname
	I0103 20:32:44.766387   67249 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-195281
	
	I0103 20:32:44.766419   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHHostname
	I0103 20:32:44.769605   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:44.770020   67249 main.go:141] libmachine: (newest-cni-195281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:49:af", ip: ""} in network mk-newest-cni-195281: {Iface:virbr3 ExpiryTime:2024-01-03 21:32:34 +0000 UTC Type:0 Mac:52:54:00:5a:49:af Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:newest-cni-195281 Clientid:01:52:54:00:5a:49:af}
	I0103 20:32:44.770063   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined IP address 192.168.72.219 and MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:44.770286   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHPort
	I0103 20:32:44.770478   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHKeyPath
	I0103 20:32:44.770696   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHKeyPath
	I0103 20:32:44.770855   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHUsername
	I0103 20:32:44.771047   67249 main.go:141] libmachine: Using SSH client type: native
	I0103 20:32:44.771391   67249 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.72.219 22 <nil> <nil>}
	I0103 20:32:44.771416   67249 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-195281' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-195281/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-195281' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0103 20:32:44.906281   67249 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0103 20:32:44.906308   67249 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17885-9609/.minikube CaCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17885-9609/.minikube}
	I0103 20:32:44.906343   67249 buildroot.go:174] setting up certificates
	I0103 20:32:44.906354   67249 provision.go:83] configureAuth start
	I0103 20:32:44.906370   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetMachineName
	I0103 20:32:44.906662   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetIP
	I0103 20:32:44.909425   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:44.909736   67249 main.go:141] libmachine: (newest-cni-195281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:49:af", ip: ""} in network mk-newest-cni-195281: {Iface:virbr3 ExpiryTime:2024-01-03 21:32:34 +0000 UTC Type:0 Mac:52:54:00:5a:49:af Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:newest-cni-195281 Clientid:01:52:54:00:5a:49:af}
	I0103 20:32:44.909763   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined IP address 192.168.72.219 and MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:44.909936   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHHostname
	I0103 20:32:44.912539   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:44.913023   67249 main.go:141] libmachine: (newest-cni-195281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:49:af", ip: ""} in network mk-newest-cni-195281: {Iface:virbr3 ExpiryTime:2024-01-03 21:32:34 +0000 UTC Type:0 Mac:52:54:00:5a:49:af Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:newest-cni-195281 Clientid:01:52:54:00:5a:49:af}
	I0103 20:32:44.913051   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined IP address 192.168.72.219 and MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:44.913266   67249 provision.go:138] copyHostCerts
	I0103 20:32:44.913339   67249 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem, removing ...
	I0103 20:32:44.913361   67249 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem
	I0103 20:32:44.913448   67249 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/ca.pem (1078 bytes)
	I0103 20:32:44.913580   67249 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem, removing ...
	I0103 20:32:44.913592   67249 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem
	I0103 20:32:44.913631   67249 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/cert.pem (1123 bytes)
	I0103 20:32:44.913722   67249 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem, removing ...
	I0103 20:32:44.913732   67249 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem
	I0103 20:32:44.913769   67249 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17885-9609/.minikube/key.pem (1679 bytes)
	I0103 20:32:44.913851   67249 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem org=jenkins.newest-cni-195281 san=[192.168.72.219 192.168.72.219 localhost 127.0.0.1 minikube newest-cni-195281]
	I0103 20:32:45.098688   67249 provision.go:172] copyRemoteCerts
	I0103 20:32:45.098762   67249 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0103 20:32:45.098793   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHHostname
	I0103 20:32:45.101827   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:45.102181   67249 main.go:141] libmachine: (newest-cni-195281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:49:af", ip: ""} in network mk-newest-cni-195281: {Iface:virbr3 ExpiryTime:2024-01-03 21:32:34 +0000 UTC Type:0 Mac:52:54:00:5a:49:af Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:newest-cni-195281 Clientid:01:52:54:00:5a:49:af}
	I0103 20:32:45.102213   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined IP address 192.168.72.219 and MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:45.102468   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHPort
	I0103 20:32:45.102706   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHKeyPath
	I0103 20:32:45.102868   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHUsername
	I0103 20:32:45.103005   67249 sshutil.go:53] new ssh client: &{IP:192.168.72.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/newest-cni-195281/id_rsa Username:docker}
	I0103 20:32:45.197407   67249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0103 20:32:45.221474   67249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0103 20:32:45.244138   67249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0103 20:32:45.268222   67249 provision.go:86] duration metric: configureAuth took 361.849849ms
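A hedged sketch for verifying the files copied by copyRemoteCerts above: check the server cert against the CA and list its SANs over the same SSH path (the `-ext subjectAltName` option assumes OpenSSL 1.1.1+ on the guest):
	ssh -i /home/jenkins/minikube-integration/17885-9609/.minikube/machines/newest-cni-195281/id_rsa docker@192.168.72.219 \
	  'sudo openssl verify -CAfile /etc/docker/ca.pem /etc/docker/server.pem && sudo openssl x509 -in /etc/docker/server.pem -noout -ext subjectAltName'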
	I0103 20:32:45.268253   67249 buildroot.go:189] setting minikube options for container-runtime
	I0103 20:32:45.268431   67249 config.go:182] Loaded profile config "newest-cni-195281": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0103 20:32:45.268531   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHHostname
	I0103 20:32:45.271603   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:45.272110   67249 main.go:141] libmachine: (newest-cni-195281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:49:af", ip: ""} in network mk-newest-cni-195281: {Iface:virbr3 ExpiryTime:2024-01-03 21:32:34 +0000 UTC Type:0 Mac:52:54:00:5a:49:af Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:newest-cni-195281 Clientid:01:52:54:00:5a:49:af}
	I0103 20:32:45.272146   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined IP address 192.168.72.219 and MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:45.272402   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHPort
	I0103 20:32:45.272676   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHKeyPath
	I0103 20:32:45.272851   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHKeyPath
	I0103 20:32:45.273015   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHUsername
	I0103 20:32:45.273229   67249 main.go:141] libmachine: Using SSH client type: native
	I0103 20:32:45.273571   67249 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.72.219 22 <nil> <nil>}
	I0103 20:32:45.273593   67249 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0103 20:32:45.615676   67249 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0103 20:32:45.615712   67249 main.go:141] libmachine: Checking connection to Docker...
	I0103 20:32:45.615725   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetURL
	I0103 20:32:45.617050   67249 main.go:141] libmachine: (newest-cni-195281) DBG | Using libvirt version 6000000
	I0103 20:32:45.619845   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:45.620254   67249 main.go:141] libmachine: (newest-cni-195281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:49:af", ip: ""} in network mk-newest-cni-195281: {Iface:virbr3 ExpiryTime:2024-01-03 21:32:34 +0000 UTC Type:0 Mac:52:54:00:5a:49:af Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:newest-cni-195281 Clientid:01:52:54:00:5a:49:af}
	I0103 20:32:45.620287   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined IP address 192.168.72.219 and MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:45.620398   67249 main.go:141] libmachine: Docker is up and running!
	I0103 20:32:45.620418   67249 main.go:141] libmachine: Reticulating splines...
	I0103 20:32:45.620426   67249 client.go:171] LocalClient.Create took 26.208121017s
	I0103 20:32:45.620449   67249 start.go:167] duration metric: libmachine.API.Create for "newest-cni-195281" took 26.208190465s
	I0103 20:32:45.620456   67249 start.go:300] post-start starting for "newest-cni-195281" (driver="kvm2")
	I0103 20:32:45.620467   67249 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0103 20:32:45.620488   67249 main.go:141] libmachine: (newest-cni-195281) Calling .DriverName
	I0103 20:32:45.620753   67249 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0103 20:32:45.620791   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHHostname
	I0103 20:32:45.623465   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:45.623873   67249 main.go:141] libmachine: (newest-cni-195281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:49:af", ip: ""} in network mk-newest-cni-195281: {Iface:virbr3 ExpiryTime:2024-01-03 21:32:34 +0000 UTC Type:0 Mac:52:54:00:5a:49:af Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:newest-cni-195281 Clientid:01:52:54:00:5a:49:af}
	I0103 20:32:45.623902   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined IP address 192.168.72.219 and MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:45.624029   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHPort
	I0103 20:32:45.624213   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHKeyPath
	I0103 20:32:45.624385   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHUsername
	I0103 20:32:45.624523   67249 sshutil.go:53] new ssh client: &{IP:192.168.72.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/newest-cni-195281/id_rsa Username:docker}
	I0103 20:32:45.718372   67249 ssh_runner.go:195] Run: cat /etc/os-release
	I0103 20:32:45.722729   67249 info.go:137] Remote host: Buildroot 2021.02.12
	I0103 20:32:45.722762   67249 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-9609/.minikube/addons for local assets ...
	I0103 20:32:45.722864   67249 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-9609/.minikube/files for local assets ...
	I0103 20:32:45.722984   67249 filesync.go:149] local asset: /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem -> 167952.pem in /etc/ssl/certs
	I0103 20:32:45.723125   67249 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0103 20:32:45.733617   67249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem --> /etc/ssl/certs/167952.pem (1708 bytes)
	I0103 20:32:45.757682   67249 start.go:303] post-start completed in 137.211001ms
	I0103 20:32:45.757749   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetConfigRaw
	I0103 20:32:45.758396   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetIP
	I0103 20:32:45.761402   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:45.761798   67249 main.go:141] libmachine: (newest-cni-195281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:49:af", ip: ""} in network mk-newest-cni-195281: {Iface:virbr3 ExpiryTime:2024-01-03 21:32:34 +0000 UTC Type:0 Mac:52:54:00:5a:49:af Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:newest-cni-195281 Clientid:01:52:54:00:5a:49:af}
	I0103 20:32:45.761832   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined IP address 192.168.72.219 and MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:45.762088   67249 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/newest-cni-195281/config.json ...
	I0103 20:32:45.762302   67249 start.go:128] duration metric: createHost completed in 26.369804551s
	I0103 20:32:45.762332   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHHostname
	I0103 20:32:45.764911   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:45.765288   67249 main.go:141] libmachine: (newest-cni-195281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:49:af", ip: ""} in network mk-newest-cni-195281: {Iface:virbr3 ExpiryTime:2024-01-03 21:32:34 +0000 UTC Type:0 Mac:52:54:00:5a:49:af Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:newest-cni-195281 Clientid:01:52:54:00:5a:49:af}
	I0103 20:32:45.765321   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined IP address 192.168.72.219 and MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:45.765500   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHPort
	I0103 20:32:45.765694   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHKeyPath
	I0103 20:32:45.765902   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHKeyPath
	I0103 20:32:45.766060   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHUsername
	I0103 20:32:45.766292   67249 main.go:141] libmachine: Using SSH client type: native
	I0103 20:32:45.766620   67249 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 192.168.72.219 22 <nil> <nil>}
	I0103 20:32:45.766632   67249 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0103 20:32:45.895678   67249 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704313965.882309318
	
	I0103 20:32:45.895711   67249 fix.go:206] guest clock: 1704313965.882309318
	I0103 20:32:45.895722   67249 fix.go:219] Guest: 2024-01-03 20:32:45.882309318 +0000 UTC Remote: 2024-01-03 20:32:45.762315613 +0000 UTC m=+26.509941419 (delta=119.993705ms)
	I0103 20:32:45.895748   67249 fix.go:190] guest clock delta is within tolerance: 119.993705ms
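A sketch of the same skew check done by hand, assuming KEY points at the machine id_rsa path used throughout this log:
	guest=$(ssh -i "$KEY" docker@192.168.72.219 'date +%s.%N')   # guest clock
	host=$(date +%s.%N)                                          # local clock
	awk -v h="$host" -v g="$guest" 'BEGIN { printf "guest clock delta: %.3fs\n", h - g }'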
	I0103 20:32:45.895770   67249 start.go:83] releasing machines lock for "newest-cni-195281", held for 26.50335784s
	I0103 20:32:45.895801   67249 main.go:141] libmachine: (newest-cni-195281) Calling .DriverName
	I0103 20:32:45.896111   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetIP
	I0103 20:32:45.898979   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:45.899363   67249 main.go:141] libmachine: (newest-cni-195281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:49:af", ip: ""} in network mk-newest-cni-195281: {Iface:virbr3 ExpiryTime:2024-01-03 21:32:34 +0000 UTC Type:0 Mac:52:54:00:5a:49:af Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:newest-cni-195281 Clientid:01:52:54:00:5a:49:af}
	I0103 20:32:45.899413   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined IP address 192.168.72.219 and MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:45.899560   67249 main.go:141] libmachine: (newest-cni-195281) Calling .DriverName
	I0103 20:32:45.900114   67249 main.go:141] libmachine: (newest-cni-195281) Calling .DriverName
	I0103 20:32:45.900299   67249 main.go:141] libmachine: (newest-cni-195281) Calling .DriverName
	I0103 20:32:45.900417   67249 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0103 20:32:45.900468   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHHostname
	I0103 20:32:45.900602   67249 ssh_runner.go:195] Run: cat /version.json
	I0103 20:32:45.900633   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHHostname
	I0103 20:32:45.903625   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:45.903655   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:45.904059   67249 main.go:141] libmachine: (newest-cni-195281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:49:af", ip: ""} in network mk-newest-cni-195281: {Iface:virbr3 ExpiryTime:2024-01-03 21:32:34 +0000 UTC Type:0 Mac:52:54:00:5a:49:af Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:newest-cni-195281 Clientid:01:52:54:00:5a:49:af}
	I0103 20:32:45.904096   67249 main.go:141] libmachine: (newest-cni-195281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:49:af", ip: ""} in network mk-newest-cni-195281: {Iface:virbr3 ExpiryTime:2024-01-03 21:32:34 +0000 UTC Type:0 Mac:52:54:00:5a:49:af Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:newest-cni-195281 Clientid:01:52:54:00:5a:49:af}
	I0103 20:32:45.904122   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined IP address 192.168.72.219 and MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:45.904142   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined IP address 192.168.72.219 and MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:45.904262   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHPort
	I0103 20:32:45.904374   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHPort
	I0103 20:32:45.904453   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHKeyPath
	I0103 20:32:45.904522   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHKeyPath
	I0103 20:32:45.904666   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHUsername
	I0103 20:32:45.904708   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHUsername
	I0103 20:32:45.904838   67249 sshutil.go:53] new ssh client: &{IP:192.168.72.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/newest-cni-195281/id_rsa Username:docker}
	I0103 20:32:45.904893   67249 sshutil.go:53] new ssh client: &{IP:192.168.72.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/newest-cni-195281/id_rsa Username:docker}
	I0103 20:32:46.030977   67249 ssh_runner.go:195] Run: systemctl --version
	I0103 20:32:46.037034   67249 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0103 20:32:46.200079   67249 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0103 20:32:46.206922   67249 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0103 20:32:46.207016   67249 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0103 20:32:46.223019   67249 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0103 20:32:46.223047   67249 start.go:475] detecting cgroup driver to use...
	I0103 20:32:46.223127   67249 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0103 20:32:46.239996   67249 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0103 20:32:46.253612   67249 docker.go:203] disabling cri-docker service (if available) ...
	I0103 20:32:46.253699   67249 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0103 20:32:46.267450   67249 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0103 20:32:46.282771   67249 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0103 20:32:46.393693   67249 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0103 20:32:46.526478   67249 docker.go:219] disabling docker service ...
	I0103 20:32:46.526587   67249 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0103 20:32:46.540410   67249 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0103 20:32:46.552921   67249 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0103 20:32:46.683462   67249 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0103 20:32:46.805351   67249 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0103 20:32:46.819457   67249 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0103 20:32:46.836394   67249 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0103 20:32:46.836464   67249 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:32:46.845831   67249 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0103 20:32:46.845925   67249 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:32:46.855232   67249 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:32:46.864892   67249 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
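A quick way to confirm on the guest that the three sed edits above landed in the CRI-O drop-in; the expected values are inferred from the commands in this log, not read from the file:
	grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	# expected, approximately:
	#   pause_image = "registry.k8s.io/pause:3.9"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"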
	I0103 20:32:46.873915   67249 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0103 20:32:46.883629   67249 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0103 20:32:46.892075   67249 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0103 20:32:46.892200   67249 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0103 20:32:46.904374   67249 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0103 20:32:46.913766   67249 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0103 20:32:47.034679   67249 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0103 20:32:47.216427   67249 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0103 20:32:47.216509   67249 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0103 20:32:47.222160   67249 start.go:543] Will wait 60s for crictl version
	I0103 20:32:47.222235   67249 ssh_runner.go:195] Run: which crictl
	I0103 20:32:47.226110   67249 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0103 20:32:47.268069   67249 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0103 20:32:47.268163   67249 ssh_runner.go:195] Run: crio --version
	I0103 20:32:47.317148   67249 ssh_runner.go:195] Run: crio --version
	I0103 20:32:47.365121   67249 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I0103 20:32:47.366551   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetIP
	I0103 20:32:47.369708   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:47.369977   67249 main.go:141] libmachine: (newest-cni-195281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:49:af", ip: ""} in network mk-newest-cni-195281: {Iface:virbr3 ExpiryTime:2024-01-03 21:32:34 +0000 UTC Type:0 Mac:52:54:00:5a:49:af Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:newest-cni-195281 Clientid:01:52:54:00:5a:49:af}
	I0103 20:32:47.369997   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined IP address 192.168.72.219 and MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:32:47.370262   67249 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0103 20:32:47.374478   67249 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0103 20:32:47.388565   67249 localpath.go:92] copying /home/jenkins/minikube-integration/17885-9609/.minikube/client.crt -> /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/newest-cni-195281/client.crt
	I0103 20:32:47.388746   67249 localpath.go:117] copying /home/jenkins/minikube-integration/17885-9609/.minikube/client.key -> /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/newest-cni-195281/client.key
	I0103 20:32:47.390765   67249 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0103 20:32:47.392153   67249 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0103 20:32:47.392217   67249 ssh_runner.go:195] Run: sudo crictl images --output json
	I0103 20:32:47.427843   67249 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0103 20:32:47.427922   67249 ssh_runner.go:195] Run: which lz4
	I0103 20:32:47.431931   67249 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0103 20:32:47.436174   67249 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0103 20:32:47.436209   67249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (401795125 bytes)
	I0103 20:32:48.886506   67249 crio.go:444] Took 1.454620 seconds to copy over tarball
	I0103 20:32:48.886605   67249 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0103 20:32:51.425832   67249 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.539199724s)
	I0103 20:32:51.425868   67249 crio.go:451] Took 2.539326 seconds to extract the tarball
	I0103 20:32:51.425880   67249 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0103 20:32:51.463537   67249 ssh_runner.go:195] Run: sudo crictl images --output json
	I0103 20:32:51.542489   67249 crio.go:496] all images are preloaded for cri-o runtime.
	I0103 20:32:51.542535   67249 cache_images.go:84] Images are preloaded, skipping loading
	I0103 20:32:51.542644   67249 ssh_runner.go:195] Run: crio config
	I0103 20:32:51.604708   67249 cni.go:84] Creating CNI manager for ""
	I0103 20:32:51.604736   67249 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0103 20:32:51.604756   67249 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I0103 20:32:51.604774   67249 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.72.219 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-195281 NodeName:newest-cni-195281 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.219"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureAr
gs:map[] NodeIP:192.168.72.219 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0103 20:32:51.604921   67249 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.219
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-195281"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.219
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.219"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
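A hedged way to sanity-check the generated config before the real init further below, once it has been copied to the VM as /var/tmp/minikube/kubeadm.yaml (binary path taken from this log):
	sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml --dry-run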
	
	I0103 20:32:51.604998   67249 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-195281 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.219
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-195281 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0103 20:32:51.605063   67249 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0103 20:32:51.614067   67249 binaries.go:44] Found k8s binaries, skipping transfer
	I0103 20:32:51.614138   67249 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0103 20:32:51.622881   67249 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (419 bytes)
	I0103 20:32:51.639844   67249 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0103 20:32:51.657148   67249 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2233 bytes)
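A hedged manual check on the guest that systemd sees the kubelet unit and the 10-kubeadm.conf drop-in just written (the actual daemon-reload/restart is not shown at this point in the log):
	sudo systemctl daemon-reload
	systemctl cat kubelet                                  # unit file plus 10-kubeadm.conf drop-in
	systemctl show kubelet -p FragmentPath -p DropInPaths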
	I0103 20:32:51.673717   67249 ssh_runner.go:195] Run: grep 192.168.72.219	control-plane.minikube.internal$ /etc/hosts
	I0103 20:32:51.677731   67249 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.219	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0103 20:32:51.691172   67249 certs.go:56] Setting up /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/newest-cni-195281 for IP: 192.168.72.219
	I0103 20:32:51.691216   67249 certs.go:190] acquiring lock for shared ca certs: {Name:mkcbd6a6a2f3ee7625ecf4a1f72bb7f9689bd33d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:32:51.691406   67249 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17885-9609/.minikube/ca.key
	I0103 20:32:51.691466   67249 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.key
	I0103 20:32:51.691555   67249 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/newest-cni-195281/client.key
	I0103 20:32:51.691578   67249 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/newest-cni-195281/apiserver.key.67e26840
	I0103 20:32:51.691591   67249 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/newest-cni-195281/apiserver.crt.67e26840 with IP's: [192.168.72.219 10.96.0.1 127.0.0.1 10.0.0.1]
	I0103 20:32:51.819513   67249 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/newest-cni-195281/apiserver.crt.67e26840 ...
	I0103 20:32:51.819543   67249 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/newest-cni-195281/apiserver.crt.67e26840: {Name:mke6310b8f3a7f62097b99eb3014efd0dc20eee7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:32:51.819753   67249 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/newest-cni-195281/apiserver.key.67e26840 ...
	I0103 20:32:51.819775   67249 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/newest-cni-195281/apiserver.key.67e26840: {Name:mk86f84e3544818fe75547ad73b8572d5ea7d5d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:32:51.819889   67249 certs.go:337] copying /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/newest-cni-195281/apiserver.crt.67e26840 -> /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/newest-cni-195281/apiserver.crt
	I0103 20:32:51.819951   67249 certs.go:341] copying /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/newest-cni-195281/apiserver.key.67e26840 -> /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/newest-cni-195281/apiserver.key
	I0103 20:32:51.819998   67249 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/newest-cni-195281/proxy-client.key
	I0103 20:32:51.820011   67249 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/newest-cni-195281/proxy-client.crt with IP's: []
	I0103 20:32:52.091348   67249 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/newest-cni-195281/proxy-client.crt ...
	I0103 20:32:52.091389   67249 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/newest-cni-195281/proxy-client.crt: {Name:mk0bd3b5025560ca11106a8bacced64f41bc0bb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:32:52.091598   67249 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/newest-cni-195281/proxy-client.key ...
	I0103 20:32:52.091624   67249 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/newest-cni-195281/proxy-client.key: {Name:mkb6394b7df36e99fa2b47f41fee526be70aa354 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:32:52.091875   67249 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795.pem (1338 bytes)
	W0103 20:32:52.091916   67249 certs.go:433] ignoring /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795_empty.pem, impossibly tiny 0 bytes
	I0103 20:32:52.091924   67249 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca-key.pem (1675 bytes)
	I0103 20:32:52.091945   67249 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/ca.pem (1078 bytes)
	I0103 20:32:52.091968   67249 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/cert.pem (1123 bytes)
	I0103 20:32:52.092005   67249 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/certs/home/jenkins/minikube-integration/17885-9609/.minikube/certs/key.pem (1679 bytes)
	I0103 20:32:52.092084   67249 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem (1708 bytes)
	I0103 20:32:52.092677   67249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/newest-cni-195281/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0103 20:32:52.119326   67249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/newest-cni-195281/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0103 20:32:52.144246   67249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/newest-cni-195281/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0103 20:32:52.168845   67249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/newest-cni-195281/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0103 20:32:52.193428   67249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0103 20:32:52.217391   67249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0103 20:32:52.241585   67249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0103 20:32:52.267288   67249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0103 20:32:52.292564   67249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0103 20:32:52.316091   67249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/certs/16795.pem --> /usr/share/ca-certificates/16795.pem (1338 bytes)
	I0103 20:32:52.339271   67249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/ssl/certs/167952.pem --> /usr/share/ca-certificates/167952.pem (1708 bytes)
	I0103 20:32:52.363053   67249 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0103 20:32:52.379247   67249 ssh_runner.go:195] Run: openssl version
	I0103 20:32:52.385228   67249 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0103 20:32:52.395301   67249 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:32:52.400316   67249 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  3 18:58 /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:32:52.400391   67249 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:32:52.406648   67249 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0103 20:32:52.417403   67249 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16795.pem && ln -fs /usr/share/ca-certificates/16795.pem /etc/ssl/certs/16795.pem"
	I0103 20:32:52.428037   67249 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16795.pem
	I0103 20:32:52.433100   67249 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  3 19:07 /usr/share/ca-certificates/16795.pem
	I0103 20:32:52.433177   67249 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16795.pem
	I0103 20:32:52.439099   67249 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16795.pem /etc/ssl/certs/51391683.0"
	I0103 20:32:52.449452   67249 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167952.pem && ln -fs /usr/share/ca-certificates/167952.pem /etc/ssl/certs/167952.pem"
	I0103 20:32:52.460722   67249 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167952.pem
	I0103 20:32:52.465623   67249 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  3 19:07 /usr/share/ca-certificates/167952.pem
	I0103 20:32:52.465683   67249 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167952.pem
	I0103 20:32:52.471232   67249 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167952.pem /etc/ssl/certs/3ec20f2e.0"
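The .0 symlink names used above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject-name hashes of the respective PEMs; a minimal sketch of that pattern for the minikubeCA cert, run on the guest:
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"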
	I0103 20:32:52.481150   67249 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0103 20:32:52.485667   67249 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0103 20:32:52.485744   67249 kubeadm.go:404] StartCluster: {Name:newest-cni-195281 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.29.0-rc.2 ClusterName:newest-cni-195281 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.219 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jen
kins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 20:32:52.485826   67249 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0103 20:32:52.485909   67249 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0103 20:32:52.531498   67249 cri.go:89] found id: ""
	I0103 20:32:52.531561   67249 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0103 20:32:52.540939   67249 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0103 20:32:52.550366   67249 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0103 20:32:52.561098   67249 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0103 20:32:52.561141   67249 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0103 20:32:52.688110   67249 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.2
	I0103 20:32:52.688227   67249 kubeadm.go:322] [preflight] Running pre-flight checks
	I0103 20:32:52.982436   67249 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0103 20:32:52.982649   67249 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0103 20:32:52.982759   67249 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0103 20:32:53.224308   67249 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0103 20:32:53.374760   67249 out.go:204]   - Generating certificates and keys ...
	I0103 20:32:53.374889   67249 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0103 20:32:53.374992   67249 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0103 20:32:53.375097   67249 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0103 20:32:53.441111   67249 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0103 20:32:53.628208   67249 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0103 20:32:53.797130   67249 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0103 20:32:53.952777   67249 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0103 20:32:53.953156   67249 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-195281] and IPs [192.168.72.219 127.0.0.1 ::1]
	I0103 20:32:54.217335   67249 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0103 20:32:54.217519   67249 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-195281] and IPs [192.168.72.219 127.0.0.1 ::1]
	I0103 20:32:54.566407   67249 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0103 20:32:54.711625   67249 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0103 20:32:54.998510   67249 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0103 20:32:54.998854   67249 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0103 20:32:55.388836   67249 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0103 20:32:55.480482   67249 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0103 20:32:55.693814   67249 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0103 20:32:55.832458   67249 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0103 20:32:55.924416   67249 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0103 20:32:55.925246   67249 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0103 20:32:55.928467   67249 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0103 20:32:55.930672   67249 out.go:204]   - Booting up control plane ...
	I0103 20:32:55.930771   67249 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0103 20:32:55.930840   67249 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0103 20:32:55.930933   67249 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0103 20:32:55.948035   67249 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0103 20:32:55.949287   67249 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0103 20:32:55.949335   67249 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0103 20:32:56.085462   67249 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0103 20:33:04.088972   67249 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.003943 seconds
	I0103 20:33:04.109414   67249 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0103 20:33:04.127616   67249 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0103 20:33:04.668745   67249 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0103 20:33:04.668981   67249 kubeadm.go:322] [mark-control-plane] Marking the node newest-cni-195281 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0103 20:33:05.184119   67249 kubeadm.go:322] [bootstrap-token] Using token: 2cn0nj.lvw1854yz02ozc4e
	I0103 20:33:05.185662   67249 out.go:204]   - Configuring RBAC rules ...
	I0103 20:33:05.185785   67249 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0103 20:33:05.196688   67249 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0103 20:33:05.205501   67249 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0103 20:33:05.210178   67249 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0103 20:33:05.214606   67249 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0103 20:33:05.219096   67249 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0103 20:33:05.237231   67249 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0103 20:33:05.505466   67249 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0103 20:33:05.634282   67249 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0103 20:33:05.635368   67249 kubeadm.go:322] 
	I0103 20:33:05.635454   67249 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0103 20:33:05.635465   67249 kubeadm.go:322] 
	I0103 20:33:05.635574   67249 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0103 20:33:05.635615   67249 kubeadm.go:322] 
	I0103 20:33:05.635654   67249 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0103 20:33:05.635737   67249 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0103 20:33:05.635798   67249 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0103 20:33:05.635807   67249 kubeadm.go:322] 
	I0103 20:33:05.635897   67249 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0103 20:33:05.635911   67249 kubeadm.go:322] 
	I0103 20:33:05.635966   67249 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0103 20:33:05.635988   67249 kubeadm.go:322] 
	I0103 20:33:05.636075   67249 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0103 20:33:05.636163   67249 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0103 20:33:05.636267   67249 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0103 20:33:05.636281   67249 kubeadm.go:322] 
	I0103 20:33:05.636386   67249 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0103 20:33:05.636487   67249 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0103 20:33:05.636500   67249 kubeadm.go:322] 
	I0103 20:33:05.636618   67249 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 2cn0nj.lvw1854yz02ozc4e \
	I0103 20:33:05.636787   67249 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:abd7748e33dd825416f0452914584982da7041f4caa98027889459d3fee91b12 \
	I0103 20:33:05.636836   67249 kubeadm.go:322] 	--control-plane 
	I0103 20:33:05.636850   67249 kubeadm.go:322] 
	I0103 20:33:05.636969   67249 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0103 20:33:05.636981   67249 kubeadm.go:322] 
	I0103 20:33:05.637089   67249 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 2cn0nj.lvw1854yz02ozc4e \
	I0103 20:33:05.637207   67249 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:abd7748e33dd825416f0452914584982da7041f4caa98027889459d3fee91b12 
	I0103 20:33:05.637736   67249 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
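	(Note on the kubeadm output above: bootstrap tokens like 2cn0nj.lvw1854yz02ozc4e are short-lived, so if one expires before a node joins, a fresh join command can be generated on the control-plane node. This is generic kubeadm usage for reference, not a step this test run performed, and it assumes kubeadm is on PATH inside the node:)

	    # Regenerate a join command with a new bootstrap token (illustrative only;
	    # run on the control-plane node).
	    sudo kubeadm token create --print-join-command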
	I0103 20:33:05.637759   67249 cni.go:84] Creating CNI manager for ""
	I0103 20:33:05.637766   67249 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0103 20:33:05.639750   67249 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0103 20:33:05.641373   67249 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0103 20:33:05.691055   67249 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
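	(The bridge CNI step above writes a 457-byte conflist to /etc/cni/net.d/1-k8s.conflist inside the VM. When reproducing this run, one way to inspect what was actually written, mirroring the CLI invocation style used elsewhere in this report and assuming the newest-cni-195281 profile still exists locally, is:)

	    # Print the generated bridge CNI config on the node (illustrative only).
	    out/minikube-linux-amd64 -p newest-cni-195281 ssh "sudo cat /etc/cni/net.d/1-k8s.conflist"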
	I0103 20:33:05.744358   67249 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0103 20:33:05.744420   67249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:33:05.744430   67249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=1b6a81cbc05f28310ff11df4170e79e2b8bf477a minikube.k8s.io/name=newest-cni-195281 minikube.k8s.io/updated_at=2024_01_03T20_33_05_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:33:05.803640   67249 ops.go:34] apiserver oom_adj: -16
	I0103 20:33:06.019502   67249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:33:06.520397   67249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:33:07.019980   67249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:33:07.520416   67249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:33:08.019777   67249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:33:08.520608   67249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:33:09.020553   67249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:33:09.520149   67249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:33:10.020370   67249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:33:10.520393   67249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:33:11.020311   67249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:33:11.520514   67249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:33:12.020199   67249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:33:12.519615   67249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:33:13.020003   67249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:33:13.519798   67249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:33:14.020401   67249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:33:14.520399   67249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:33:15.019786   67249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:33:15.520225   67249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:33:16.020497   67249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:33:16.520261   67249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:33:17.019700   67249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:33:17.520507   67249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:33:17.660212   67249 kubeadm.go:1088] duration metric: took 11.915870696s to wait for elevateKubeSystemPrivileges.
	I0103 20:33:17.660247   67249 kubeadm.go:406] StartCluster complete in 25.174518906s
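	(The repeated "kubectl get sa default" calls above are minikube polling until the default ServiceAccount exists before it finishes the elevateKubeSystemPrivileges step. A rough shell equivalent of that wait, using the binary and kubeconfig paths shown in the log; minikube drives this from Go in kubeadm.go rather than via a shell loop:)

	    # Poll until the "default" ServiceAccount appears (illustrative sketch only).
	    until sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default \
	        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5
	    done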
	I0103 20:33:17.660270   67249 settings.go:142] acquiring lock: {Name:mkd213c48538fa01cb82b417485055a8adbf5e4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:33:17.660350   67249 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17885-9609/kubeconfig
	I0103 20:33:17.662283   67249 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-9609/kubeconfig: {Name:mkbd4e6a8b39f5a4a43fb71671a7bbd8b1617cf1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:33:17.662580   67249 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0103 20:33:17.662668   67249 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0103 20:33:17.662773   67249 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-195281"
	I0103 20:33:17.662798   67249 addons.go:237] Setting addon storage-provisioner=true in "newest-cni-195281"
	I0103 20:33:17.662815   67249 config.go:182] Loaded profile config "newest-cni-195281": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0103 20:33:17.662855   67249 host.go:66] Checking if "newest-cni-195281" exists ...
	I0103 20:33:17.662870   67249 addons.go:69] Setting default-storageclass=true in profile "newest-cni-195281"
	I0103 20:33:17.662885   67249 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-195281"
	I0103 20:33:17.663309   67249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:33:17.663352   67249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:33:17.663354   67249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:33:17.663396   67249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:33:17.679378   67249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35853
	I0103 20:33:17.679381   67249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36405
	I0103 20:33:17.679756   67249 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:33:17.679913   67249 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:33:17.680300   67249 main.go:141] libmachine: Using API Version  1
	I0103 20:33:17.680319   67249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:33:17.680437   67249 main.go:141] libmachine: Using API Version  1
	I0103 20:33:17.680465   67249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:33:17.680725   67249 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:33:17.680785   67249 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:33:17.681141   67249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:33:17.681166   67249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:33:17.681335   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetState
	I0103 20:33:17.684878   67249 addons.go:237] Setting addon default-storageclass=true in "newest-cni-195281"
	I0103 20:33:17.684929   67249 host.go:66] Checking if "newest-cni-195281" exists ...
	I0103 20:33:17.685322   67249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:33:17.685370   67249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:33:17.698698   67249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37813
	I0103 20:33:17.699206   67249 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:33:17.699802   67249 main.go:141] libmachine: Using API Version  1
	I0103 20:33:17.699833   67249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:33:17.700253   67249 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:33:17.700494   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetState
	I0103 20:33:17.702827   67249 main.go:141] libmachine: (newest-cni-195281) Calling .DriverName
	I0103 20:33:17.702897   67249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38757
	I0103 20:33:17.704909   67249 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 20:33:17.703310   67249 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:33:17.705444   67249 main.go:141] libmachine: Using API Version  1
	I0103 20:33:17.706865   67249 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0103 20:33:17.706872   67249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:33:17.706878   67249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0103 20:33:17.706894   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHHostname
	I0103 20:33:17.707346   67249 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:33:17.707895   67249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:33:17.707927   67249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:33:17.710637   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:33:17.711043   67249 main.go:141] libmachine: (newest-cni-195281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:49:af", ip: ""} in network mk-newest-cni-195281: {Iface:virbr3 ExpiryTime:2024-01-03 21:32:34 +0000 UTC Type:0 Mac:52:54:00:5a:49:af Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:newest-cni-195281 Clientid:01:52:54:00:5a:49:af}
	I0103 20:33:17.711079   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined IP address 192.168.72.219 and MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:33:17.711194   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHPort
	I0103 20:33:17.711332   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHKeyPath
	I0103 20:33:17.711429   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHUsername
	I0103 20:33:17.711599   67249 sshutil.go:53] new ssh client: &{IP:192.168.72.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/newest-cni-195281/id_rsa Username:docker}
	I0103 20:33:17.724354   67249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41197
	I0103 20:33:17.724813   67249 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:33:17.725271   67249 main.go:141] libmachine: Using API Version  1
	I0103 20:33:17.725297   67249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:33:17.725645   67249 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:33:17.725827   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetState
	I0103 20:33:17.727646   67249 main.go:141] libmachine: (newest-cni-195281) Calling .DriverName
	I0103 20:33:17.727945   67249 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0103 20:33:17.727960   67249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0103 20:33:17.727975   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHHostname
	I0103 20:33:17.730967   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:33:17.731436   67249 main.go:141] libmachine: (newest-cni-195281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:49:af", ip: ""} in network mk-newest-cni-195281: {Iface:virbr3 ExpiryTime:2024-01-03 21:32:34 +0000 UTC Type:0 Mac:52:54:00:5a:49:af Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:newest-cni-195281 Clientid:01:52:54:00:5a:49:af}
	I0103 20:33:17.731455   67249 main.go:141] libmachine: (newest-cni-195281) DBG | domain newest-cni-195281 has defined IP address 192.168.72.219 and MAC address 52:54:00:5a:49:af in network mk-newest-cni-195281
	I0103 20:33:17.731609   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHPort
	I0103 20:33:17.731794   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHKeyPath
	I0103 20:33:17.731934   67249 main.go:141] libmachine: (newest-cni-195281) Calling .GetSSHUsername
	I0103 20:33:17.732074   67249 sshutil.go:53] new ssh client: &{IP:192.168.72.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/newest-cni-195281/id_rsa Username:docker}
	I0103 20:33:17.863402   67249 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0103 20:33:17.899270   67249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0103 20:33:17.911084   67249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0103 20:33:18.198358   67249 kapi.go:248] "coredns" deployment in "kube-system" namespace and "newest-cni-195281" context rescaled to 1 replicas
	I0103 20:33:18.198407   67249 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.219 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0103 20:33:18.200361   67249 out.go:177] * Verifying Kubernetes components...
	I0103 20:33:18.201742   67249 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 20:33:18.430854   67249 start.go:929] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0103 20:33:18.785127   67249 main.go:141] libmachine: Making call to close driver server
	I0103 20:33:18.785165   67249 main.go:141] libmachine: (newest-cni-195281) Calling .Close
	I0103 20:33:18.785198   67249 main.go:141] libmachine: Making call to close driver server
	I0103 20:33:18.785223   67249 main.go:141] libmachine: (newest-cni-195281) Calling .Close
	I0103 20:33:18.785539   67249 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:33:18.785556   67249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:33:18.785568   67249 main.go:141] libmachine: Making call to close driver server
	I0103 20:33:18.785577   67249 main.go:141] libmachine: (newest-cni-195281) Calling .Close
	I0103 20:33:18.786232   67249 main.go:141] libmachine: (newest-cni-195281) DBG | Closing plugin on server side
	I0103 20:33:18.786243   67249 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:33:18.786263   67249 main.go:141] libmachine: (newest-cni-195281) DBG | Closing plugin on server side
	I0103 20:33:18.786267   67249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:33:18.786294   67249 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:33:18.786310   67249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:33:18.786325   67249 main.go:141] libmachine: Making call to close driver server
	I0103 20:33:18.786339   67249 main.go:141] libmachine: (newest-cni-195281) Calling .Close
	I0103 20:33:18.786621   67249 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:33:18.786643   67249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:33:18.786641   67249 main.go:141] libmachine: (newest-cni-195281) DBG | Closing plugin on server side
	I0103 20:33:18.787350   67249 api_server.go:52] waiting for apiserver process to appear ...
	I0103 20:33:18.787409   67249 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:33:18.809599   67249 api_server.go:72] duration metric: took 611.153897ms to wait for apiserver process to appear ...
	I0103 20:33:18.809631   67249 api_server.go:88] waiting for apiserver healthz status ...
	I0103 20:33:18.809654   67249 api_server.go:253] Checking apiserver healthz at https://192.168.72.219:8443/healthz ...
	I0103 20:33:18.815444   67249 main.go:141] libmachine: Making call to close driver server
	I0103 20:33:18.815470   67249 main.go:141] libmachine: (newest-cni-195281) Calling .Close
	I0103 20:33:18.815776   67249 main.go:141] libmachine: Successfully made call to close driver server
	I0103 20:33:18.815798   67249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0103 20:33:18.817627   67249 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0103 20:33:18.818945   67249 addons.go:508] enable addons completed in 1.156282938s: enabled=[storage-provisioner default-storageclass]
	I0103 20:33:18.824023   67249 api_server.go:279] https://192.168.72.219:8443/healthz returned 200:
	ok
	I0103 20:33:18.826233   67249 api_server.go:141] control plane version: v1.29.0-rc.2
	I0103 20:33:18.826262   67249 api_server.go:131] duration metric: took 16.623947ms to wait for apiserver health ...
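	(The healthz wait above amounts to probing the apiserver's /healthz endpoint until it returns 200 "ok". A hand-run equivalent from the host, using the node IP and port from this run and assuming the default RBAC rules that permit unauthenticated access to /healthz:)

	    # Probe the apiserver health endpoint directly (illustrative only).
	    curl -sk https://192.168.72.219:8443/healthz
	    # expected output: ok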
	I0103 20:33:18.826273   67249 system_pods.go:43] waiting for kube-system pods to appear ...
	I0103 20:33:18.841277   67249 system_pods.go:59] 8 kube-system pods found
	I0103 20:33:18.841313   67249 system_pods.go:61] "coredns-76f75df574-74kf4" [c77d0e4f-8516-4a88-a37e-741daac7540e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0103 20:33:18.841325   67249 system_pods.go:61] "coredns-76f75df574-wxv97" [a316894f-a5ed-4aac-83c0-de2a37c3680f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0103 20:33:18.841334   67249 system_pods.go:61] "etcd-newest-cni-195281" [b025aa55-b0ac-48be-8238-5a1d512f4889] Running
	I0103 20:33:18.841340   67249 system_pods.go:61] "kube-apiserver-newest-cni-195281" [15d8768e-a11c-47f5-b820-973868ed880e] Running
	I0103 20:33:18.841346   67249 system_pods.go:61] "kube-controller-manager-newest-cni-195281" [2b9ff8b8-1800-4a98-84f9-0fb99f2a7d75] Running
	I0103 20:33:18.841353   67249 system_pods.go:61] "kube-proxy-m55j5" [d9a647a9-c868-4b74-ab53-88628c2883b1] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0103 20:33:18.841361   67249 system_pods.go:61] "kube-scheduler-newest-cni-195281" [cdfab88d-73de-4929-b45c-cf517a7d9000] Running
	I0103 20:33:18.841368   67249 system_pods.go:61] "storage-provisioner" [f110f04e-58e2-438f-8db6-615c277d7266] Pending
	I0103 20:33:18.841378   67249 system_pods.go:74] duration metric: took 15.098187ms to wait for pod list to return data ...
	I0103 20:33:18.841392   67249 default_sa.go:34] waiting for default service account to be created ...
	I0103 20:33:18.846938   67249 default_sa.go:45] found service account: "default"
	I0103 20:33:18.846966   67249 default_sa.go:55] duration metric: took 5.564322ms for default service account to be created ...
	I0103 20:33:18.846978   67249 kubeadm.go:581] duration metric: took 648.541157ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0103 20:33:18.846998   67249 node_conditions.go:102] verifying NodePressure condition ...
	I0103 20:33:18.850826   67249 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0103 20:33:18.850856   67249 node_conditions.go:123] node cpu capacity is 2
	I0103 20:33:18.850868   67249 node_conditions.go:105] duration metric: took 3.865295ms to run NodePressure ...
	I0103 20:33:18.850881   67249 start.go:228] waiting for startup goroutines ...
	I0103 20:33:18.850889   67249 start.go:233] waiting for cluster config update ...
	I0103 20:33:18.850901   67249 start.go:242] writing updated cluster config ...
	I0103 20:33:18.851174   67249 ssh_runner.go:195] Run: rm -f paused
	I0103 20:33:18.906368   67249 start.go:600] kubectl: 1.29.0, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0103 20:33:18.908596   67249 out.go:177] * Done! kubectl is now configured to use "newest-cni-195281" cluster and "default" namespace by default
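	(At this point the newest-cni-195281 cluster is up and kubectl on the host points at it. Quick sanity checks one could run against it, as generic kubectl usage rather than commands captured in this log:)

	    # Confirm the node and the kube-system pods listed earlier are visible.
	    kubectl get nodes
	    kubectl -n kube-system get pods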
	
	
	==> CRI-O <==
	-- Journal begins at Wed 2024-01-03 20:13:21 UTC, ends at Wed 2024-01-03 20:34:37 UTC. --
	Jan 03 20:34:37 default-k8s-diff-port-018788 crio[725]: time="2024-01-03 20:34:37.650989929Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704314077650978042,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=3b41a38f-fec3-4abc-93f9-583def1bb3b0 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 20:34:37 default-k8s-diff-port-018788 crio[725]: time="2024-01-03 20:34:37.651566412Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=0ab4f30b-a2a7-4321-8755-0cc1f0dcb773 name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:34:37 default-k8s-diff-port-018788 crio[725]: time="2024-01-03 20:34:37.651614083Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=0ab4f30b-a2a7-4321-8755-0cc1f0dcb773 name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:34:37 default-k8s-diff-port-018788 crio[725]: time="2024-01-03 20:34:37.651807196Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3d1fa8b05cd7cc8dd982013b6295fb6f79f2c8db93aa0c408fc57ea7e898125a,PodSandboxId:be6527d03445d6fa58d54394ffd39658d656ac72a22c336705a251baa7a9fcbc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704312873014332423,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef3511cb-5587-4ea5-86b6-d52cc5afb226,},Annotations:map[string]string{io.kubernetes.container.hash: 68c028bd,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c392fb14a91e9f4a6643252d5dfac2e1c164e9206980da27ef53a85db6c130d1,PodSandboxId:baccf7a16fdfeb12fcac098e455733f670ea9f2b569244440ea0b56862308b6e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1704312848149211593,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cfdaacfb-b339-488d-968b-537870733563,},Annotations:map[string]string{io.kubernetes.container.hash: 31b9b4da,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2370f79911fd2108cab00f1fb2d4c8f16fadefbef4d6ee135f875edd865be06,PodSandboxId:56ca6ee8a63f137f2292a05567f59fb92b958a01dcda968d2dbdbafaf2508be9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704312845035625348,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-zxzqg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d066762e-7e1f-4b3a-9b21-6a7a3ca53edd,},Annotations:map[string]string{io.kubernetes.container.hash: a758356f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:365147e198ba529e04c62145182c0e5131c1812d4b8842299d0236cbf556e40f,PodSandboxId:be6527d03445d6fa58d54394ffd39658d656ac72a22c336705a251baa7a9fcbc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1704312840086461478,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: ef3511cb-5587-4ea5-86b6-d52cc5afb226,},Annotations:map[string]string{io.kubernetes.container.hash: 68c028bd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1525243614b036cac3291b5a2fbf29c28fdb68757d81418e7e09cfec2c36032,PodSandboxId:042f1c9914efd103d02790491b12b041d9d6cbf9db26cda3fda0bf0ece589ea5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704312840119281646,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wqjlv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d
e5a1b04-4bce-4111-bfe8-2adb2f947d78,},Annotations:map[string]string{io.kubernetes.container.hash: f4f4cb38,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abbaa7d1ca8582bf77f6d0951a5eca6711a89471f2239b41df4bd359e01e5a7c,PodSandboxId:9f7b2686f78ddceb890ed734bc51b694db7a26c7a3bf42bfc886fee3a075b9ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704312830458531175,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-018788,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 303ebd0fe046fe6897895a41da889b48,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bacdb6bf662499f958fae41c1520a7f963ecec63e66bca7076854532316bd9d,PodSandboxId:12f7cedbe223b2e50b1a66b12ed22ca457c8fd6662f93528652b9057ada4433f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704312830383378086,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-018788,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4aa49e06c8498ad02035a6a3c854470,},An
notations:map[string]string{io.kubernetes.container.hash: d09eccde,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b7de3342fdb5423de182fd009630fc36cee11de3a5f5ff06603fedd80e2d94b,PodSandboxId:f0c80a0255d704e395ebdab78a059b1716a87371444af6e50a4ec1b42ec3ae0a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704312829916887775,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-018788,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5
f53e8f2639e05aaf76598b82d388a7f,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce56b3ad3d4b7d59eb61a980afcc5105c128779756da52fc886d00597b03c6cc,PodSandboxId:16b3c8945f86cea9f3be3272d2381a6e4e036988c3e66976cad2be3ccff0ff8d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704312829748694586,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-018788,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5
1c440e3088352f1d026b9319d0fd133,},Annotations:map[string]string{io.kubernetes.container.hash: a6c6c5d0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=0ab4f30b-a2a7-4321-8755-0cc1f0dcb773 name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:34:37 default-k8s-diff-port-018788 crio[725]: time="2024-01-03 20:34:37.694702211Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=23e97e24-ac04-4063-9d8e-b0dce0950746 name=/runtime.v1.RuntimeService/Version
	Jan 03 20:34:37 default-k8s-diff-port-018788 crio[725]: time="2024-01-03 20:34:37.694794737Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=23e97e24-ac04-4063-9d8e-b0dce0950746 name=/runtime.v1.RuntimeService/Version
	Jan 03 20:34:37 default-k8s-diff-port-018788 crio[725]: time="2024-01-03 20:34:37.696156615Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=6321b25e-4f72-4bb2-986c-4cb9407633e8 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 20:34:37 default-k8s-diff-port-018788 crio[725]: time="2024-01-03 20:34:37.696605515Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704314077696590294,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=6321b25e-4f72-4bb2-986c-4cb9407633e8 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 20:34:37 default-k8s-diff-port-018788 crio[725]: time="2024-01-03 20:34:37.697317848Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c557b405-381c-41a2-9a85-3d9a95b60cad name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:34:37 default-k8s-diff-port-018788 crio[725]: time="2024-01-03 20:34:37.697498737Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c557b405-381c-41a2-9a85-3d9a95b60cad name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:34:37 default-k8s-diff-port-018788 crio[725]: time="2024-01-03 20:34:37.697802128Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3d1fa8b05cd7cc8dd982013b6295fb6f79f2c8db93aa0c408fc57ea7e898125a,PodSandboxId:be6527d03445d6fa58d54394ffd39658d656ac72a22c336705a251baa7a9fcbc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704312873014332423,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef3511cb-5587-4ea5-86b6-d52cc5afb226,},Annotations:map[string]string{io.kubernetes.container.hash: 68c028bd,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c392fb14a91e9f4a6643252d5dfac2e1c164e9206980da27ef53a85db6c130d1,PodSandboxId:baccf7a16fdfeb12fcac098e455733f670ea9f2b569244440ea0b56862308b6e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1704312848149211593,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cfdaacfb-b339-488d-968b-537870733563,},Annotations:map[string]string{io.kubernetes.container.hash: 31b9b4da,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2370f79911fd2108cab00f1fb2d4c8f16fadefbef4d6ee135f875edd865be06,PodSandboxId:56ca6ee8a63f137f2292a05567f59fb92b958a01dcda968d2dbdbafaf2508be9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704312845035625348,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-zxzqg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d066762e-7e1f-4b3a-9b21-6a7a3ca53edd,},Annotations:map[string]string{io.kubernetes.container.hash: a758356f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:365147e198ba529e04c62145182c0e5131c1812d4b8842299d0236cbf556e40f,PodSandboxId:be6527d03445d6fa58d54394ffd39658d656ac72a22c336705a251baa7a9fcbc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1704312840086461478,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: ef3511cb-5587-4ea5-86b6-d52cc5afb226,},Annotations:map[string]string{io.kubernetes.container.hash: 68c028bd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1525243614b036cac3291b5a2fbf29c28fdb68757d81418e7e09cfec2c36032,PodSandboxId:042f1c9914efd103d02790491b12b041d9d6cbf9db26cda3fda0bf0ece589ea5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704312840119281646,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wqjlv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d
e5a1b04-4bce-4111-bfe8-2adb2f947d78,},Annotations:map[string]string{io.kubernetes.container.hash: f4f4cb38,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abbaa7d1ca8582bf77f6d0951a5eca6711a89471f2239b41df4bd359e01e5a7c,PodSandboxId:9f7b2686f78ddceb890ed734bc51b694db7a26c7a3bf42bfc886fee3a075b9ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704312830458531175,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-018788,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 303ebd0fe046fe6897895a41da889b48,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bacdb6bf662499f958fae41c1520a7f963ecec63e66bca7076854532316bd9d,PodSandboxId:12f7cedbe223b2e50b1a66b12ed22ca457c8fd6662f93528652b9057ada4433f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704312830383378086,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-018788,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4aa49e06c8498ad02035a6a3c854470,},An
notations:map[string]string{io.kubernetes.container.hash: d09eccde,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b7de3342fdb5423de182fd009630fc36cee11de3a5f5ff06603fedd80e2d94b,PodSandboxId:f0c80a0255d704e395ebdab78a059b1716a87371444af6e50a4ec1b42ec3ae0a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704312829916887775,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-018788,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5
f53e8f2639e05aaf76598b82d388a7f,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce56b3ad3d4b7d59eb61a980afcc5105c128779756da52fc886d00597b03c6cc,PodSandboxId:16b3c8945f86cea9f3be3272d2381a6e4e036988c3e66976cad2be3ccff0ff8d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704312829748694586,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-018788,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5
1c440e3088352f1d026b9319d0fd133,},Annotations:map[string]string{io.kubernetes.container.hash: a6c6c5d0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c557b405-381c-41a2-9a85-3d9a95b60cad name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:34:37 default-k8s-diff-port-018788 crio[725]: time="2024-01-03 20:34:37.736357535Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=a0914d45-dfd2-4cf6-966f-82867c7906b2 name=/runtime.v1.RuntimeService/Version
	Jan 03 20:34:37 default-k8s-diff-port-018788 crio[725]: time="2024-01-03 20:34:37.736480282Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=a0914d45-dfd2-4cf6-966f-82867c7906b2 name=/runtime.v1.RuntimeService/Version
	Jan 03 20:34:37 default-k8s-diff-port-018788 crio[725]: time="2024-01-03 20:34:37.738588917Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=400e33f6-8011-4897-953e-515b219e3add name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 20:34:37 default-k8s-diff-port-018788 crio[725]: time="2024-01-03 20:34:37.738975776Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704314077738962910,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=400e33f6-8011-4897-953e-515b219e3add name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 20:34:37 default-k8s-diff-port-018788 crio[725]: time="2024-01-03 20:34:37.739823493Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=711e2b9b-6870-4e6f-a229-e0ec14834844 name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:34:37 default-k8s-diff-port-018788 crio[725]: time="2024-01-03 20:34:37.739896884Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=711e2b9b-6870-4e6f-a229-e0ec14834844 name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:34:37 default-k8s-diff-port-018788 crio[725]: time="2024-01-03 20:34:37.740167352Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3d1fa8b05cd7cc8dd982013b6295fb6f79f2c8db93aa0c408fc57ea7e898125a,PodSandboxId:be6527d03445d6fa58d54394ffd39658d656ac72a22c336705a251baa7a9fcbc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704312873014332423,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef3511cb-5587-4ea5-86b6-d52cc5afb226,},Annotations:map[string]string{io.kubernetes.container.hash: 68c028bd,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c392fb14a91e9f4a6643252d5dfac2e1c164e9206980da27ef53a85db6c130d1,PodSandboxId:baccf7a16fdfeb12fcac098e455733f670ea9f2b569244440ea0b56862308b6e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1704312848149211593,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cfdaacfb-b339-488d-968b-537870733563,},Annotations:map[string]string{io.kubernetes.container.hash: 31b9b4da,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2370f79911fd2108cab00f1fb2d4c8f16fadefbef4d6ee135f875edd865be06,PodSandboxId:56ca6ee8a63f137f2292a05567f59fb92b958a01dcda968d2dbdbafaf2508be9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704312845035625348,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-zxzqg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d066762e-7e1f-4b3a-9b21-6a7a3ca53edd,},Annotations:map[string]string{io.kubernetes.container.hash: a758356f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:365147e198ba529e04c62145182c0e5131c1812d4b8842299d0236cbf556e40f,PodSandboxId:be6527d03445d6fa58d54394ffd39658d656ac72a22c336705a251baa7a9fcbc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1704312840086461478,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: ef3511cb-5587-4ea5-86b6-d52cc5afb226,},Annotations:map[string]string{io.kubernetes.container.hash: 68c028bd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1525243614b036cac3291b5a2fbf29c28fdb68757d81418e7e09cfec2c36032,PodSandboxId:042f1c9914efd103d02790491b12b041d9d6cbf9db26cda3fda0bf0ece589ea5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704312840119281646,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wqjlv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d
e5a1b04-4bce-4111-bfe8-2adb2f947d78,},Annotations:map[string]string{io.kubernetes.container.hash: f4f4cb38,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abbaa7d1ca8582bf77f6d0951a5eca6711a89471f2239b41df4bd359e01e5a7c,PodSandboxId:9f7b2686f78ddceb890ed734bc51b694db7a26c7a3bf42bfc886fee3a075b9ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704312830458531175,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-018788,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 303ebd0fe046fe6897895a41da889b48,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bacdb6bf662499f958fae41c1520a7f963ecec63e66bca7076854532316bd9d,PodSandboxId:12f7cedbe223b2e50b1a66b12ed22ca457c8fd6662f93528652b9057ada4433f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704312830383378086,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-018788,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4aa49e06c8498ad02035a6a3c854470,},An
notations:map[string]string{io.kubernetes.container.hash: d09eccde,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b7de3342fdb5423de182fd009630fc36cee11de3a5f5ff06603fedd80e2d94b,PodSandboxId:f0c80a0255d704e395ebdab78a059b1716a87371444af6e50a4ec1b42ec3ae0a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704312829916887775,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-018788,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5
f53e8f2639e05aaf76598b82d388a7f,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce56b3ad3d4b7d59eb61a980afcc5105c128779756da52fc886d00597b03c6cc,PodSandboxId:16b3c8945f86cea9f3be3272d2381a6e4e036988c3e66976cad2be3ccff0ff8d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704312829748694586,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-018788,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5
1c440e3088352f1d026b9319d0fd133,},Annotations:map[string]string{io.kubernetes.container.hash: a6c6c5d0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=711e2b9b-6870-4e6f-a229-e0ec14834844 name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:34:37 default-k8s-diff-port-018788 crio[725]: time="2024-01-03 20:34:37.775451009Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=d1c17710-245f-47ab-a6d2-ca3147bd2a82 name=/runtime.v1.RuntimeService/Version
	Jan 03 20:34:37 default-k8s-diff-port-018788 crio[725]: time="2024-01-03 20:34:37.775537787Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=d1c17710-245f-47ab-a6d2-ca3147bd2a82 name=/runtime.v1.RuntimeService/Version
	Jan 03 20:34:37 default-k8s-diff-port-018788 crio[725]: time="2024-01-03 20:34:37.776663916Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=5b9a820c-29b1-4be3-a612-3a6312f91a09 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 20:34:37 default-k8s-diff-port-018788 crio[725]: time="2024-01-03 20:34:37.777188838Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704314077777174211,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=5b9a820c-29b1-4be3-a612-3a6312f91a09 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 03 20:34:37 default-k8s-diff-port-018788 crio[725]: time="2024-01-03 20:34:37.777713902Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=69b58b6a-6215-4813-9991-7128423092d2 name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:34:37 default-k8s-diff-port-018788 crio[725]: time="2024-01-03 20:34:37.777779482Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=69b58b6a-6215-4813-9991-7128423092d2 name=/runtime.v1.RuntimeService/ListContainers
	Jan 03 20:34:37 default-k8s-diff-port-018788 crio[725]: time="2024-01-03 20:34:37.777977847Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3d1fa8b05cd7cc8dd982013b6295fb6f79f2c8db93aa0c408fc57ea7e898125a,PodSandboxId:be6527d03445d6fa58d54394ffd39658d656ac72a22c336705a251baa7a9fcbc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704312873014332423,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef3511cb-5587-4ea5-86b6-d52cc5afb226,},Annotations:map[string]string{io.kubernetes.container.hash: 68c028bd,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c392fb14a91e9f4a6643252d5dfac2e1c164e9206980da27ef53a85db6c130d1,PodSandboxId:baccf7a16fdfeb12fcac098e455733f670ea9f2b569244440ea0b56862308b6e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1704312848149211593,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cfdaacfb-b339-488d-968b-537870733563,},Annotations:map[string]string{io.kubernetes.container.hash: 31b9b4da,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2370f79911fd2108cab00f1fb2d4c8f16fadefbef4d6ee135f875edd865be06,PodSandboxId:56ca6ee8a63f137f2292a05567f59fb92b958a01dcda968d2dbdbafaf2508be9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704312845035625348,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-zxzqg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d066762e-7e1f-4b3a-9b21-6a7a3ca53edd,},Annotations:map[string]string{io.kubernetes.container.hash: a758356f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:365147e198ba529e04c62145182c0e5131c1812d4b8842299d0236cbf556e40f,PodSandboxId:be6527d03445d6fa58d54394ffd39658d656ac72a22c336705a251baa7a9fcbc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1704312840086461478,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: ef3511cb-5587-4ea5-86b6-d52cc5afb226,},Annotations:map[string]string{io.kubernetes.container.hash: 68c028bd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1525243614b036cac3291b5a2fbf29c28fdb68757d81418e7e09cfec2c36032,PodSandboxId:042f1c9914efd103d02790491b12b041d9d6cbf9db26cda3fda0bf0ece589ea5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704312840119281646,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wqjlv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d
e5a1b04-4bce-4111-bfe8-2adb2f947d78,},Annotations:map[string]string{io.kubernetes.container.hash: f4f4cb38,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abbaa7d1ca8582bf77f6d0951a5eca6711a89471f2239b41df4bd359e01e5a7c,PodSandboxId:9f7b2686f78ddceb890ed734bc51b694db7a26c7a3bf42bfc886fee3a075b9ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704312830458531175,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-018788,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 303ebd0fe046fe6897895a41da889b48,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bacdb6bf662499f958fae41c1520a7f963ecec63e66bca7076854532316bd9d,PodSandboxId:12f7cedbe223b2e50b1a66b12ed22ca457c8fd6662f93528652b9057ada4433f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704312830383378086,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-018788,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4aa49e06c8498ad02035a6a3c854470,},An
notations:map[string]string{io.kubernetes.container.hash: d09eccde,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b7de3342fdb5423de182fd009630fc36cee11de3a5f5ff06603fedd80e2d94b,PodSandboxId:f0c80a0255d704e395ebdab78a059b1716a87371444af6e50a4ec1b42ec3ae0a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704312829916887775,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-018788,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5
f53e8f2639e05aaf76598b82d388a7f,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce56b3ad3d4b7d59eb61a980afcc5105c128779756da52fc886d00597b03c6cc,PodSandboxId:16b3c8945f86cea9f3be3272d2381a6e4e036988c3e66976cad2be3ccff0ff8d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704312829748694586,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-018788,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5
1c440e3088352f1d026b9319d0fd133,},Annotations:map[string]string{io.kubernetes.container.hash: a6c6c5d0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=69b58b6a-6215-4813-9991-7128423092d2 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3d1fa8b05cd7c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Running             storage-provisioner       2                   be6527d03445d       storage-provisioner
	c392fb14a91e9       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   20 minutes ago      Running             busybox                   1                   baccf7a16fdfe       busybox
	e2370f79911fd       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      20 minutes ago      Running             coredns                   1                   56ca6ee8a63f1       coredns-5dd5756b68-zxzqg
	b1525243614b0       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      20 minutes ago      Running             kube-proxy                1                   042f1c9914efd       kube-proxy-wqjlv
	365147e198ba5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Exited              storage-provisioner       1                   be6527d03445d       storage-provisioner
	abbaa7d1ca858       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      20 minutes ago      Running             kube-scheduler            1                   9f7b2686f78dd       kube-scheduler-default-k8s-diff-port-018788
	3bacdb6bf6624       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      20 minutes ago      Running             etcd                      1                   12f7cedbe223b       etcd-default-k8s-diff-port-018788
	2b7de3342fdb5       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      20 minutes ago      Running             kube-controller-manager   1                   f0c80a0255d70       kube-controller-manager-default-k8s-diff-port-018788
	ce56b3ad3d4b7       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      20 minutes ago      Running             kube-apiserver            1                   16b3c8945f86c       kube-apiserver-default-k8s-diff-port-018788
	
	
	==> coredns [e2370f79911fd2108cab00f1fb2d4c8f16fadefbef4d6ee135f875edd865be06] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:35653 - 12446 "HINFO IN 3385961418125871742.7974406874081081189. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010827111s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-018788
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-018788
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b6a81cbc05f28310ff11df4170e79e2b8bf477a
	                    minikube.k8s.io/name=default-k8s-diff-port-018788
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_03T20_05_26_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 03 Jan 2024 20:05:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-018788
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 03 Jan 2024 20:34:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 03 Jan 2024 20:29:44 +0000   Wed, 03 Jan 2024 20:05:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 03 Jan 2024 20:29:44 +0000   Wed, 03 Jan 2024 20:05:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 03 Jan 2024 20:29:44 +0000   Wed, 03 Jan 2024 20:05:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 03 Jan 2024 20:29:44 +0000   Wed, 03 Jan 2024 20:14:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.139
	  Hostname:    default-k8s-diff-port-018788
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 8ba1e9c471d0427a84d508ddb34683ca
	  System UUID:                8ba1e9c4-71d0-427a-84d5-08ddb34683ca
	  Boot ID:                    8385a80b-b061-486f-9fd0-c93e71e2403d
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         28m
	  kube-system                 coredns-5dd5756b68-zxzqg                                100m (5%!)(MISSING)     0 (0%!)(MISSING)      70Mi (3%!)(MISSING)        170Mi (8%!)(MISSING)     28m
	  kube-system                 etcd-default-k8s-diff-port-018788                       100m (5%!)(MISSING)     0 (0%!)(MISSING)      100Mi (4%!)(MISSING)       0 (0%!)(MISSING)         29m
	  kube-system                 kube-apiserver-default-k8s-diff-port-018788             250m (12%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         29m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-018788    200m (10%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         29m
	  kube-system                 kube-proxy-wqjlv                                        0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         28m
	  kube-system                 kube-scheduler-default-k8s-diff-port-018788             100m (5%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         29m
	  kube-system                 metrics-server-57f55c9bc5-pgbbj                         100m (5%!)(MISSING)     0 (0%!)(MISSING)      200Mi (9%!)(MISSING)       0 (0%!)(MISSING)         28m
	  kube-system                 storage-provisioner                                     0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%!)(MISSING)   0 (0%!)(MISSING)
	  memory             370Mi (17%!)(MISSING)  170Mi (8%!)(MISSING)
	  ephemeral-storage  0 (0%!)(MISSING)       0 (0%!)(MISSING)
	  hugepages-2Mi      0 (0%!)(MISSING)       0 (0%!)(MISSING)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 20m                kube-proxy       
	  Normal  NodeHasSufficientPID     29m (x7 over 29m)  kubelet          Node default-k8s-diff-port-018788 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    29m (x8 over 29m)  kubelet          Node default-k8s-diff-port-018788 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  29m (x8 over 29m)  kubelet          Node default-k8s-diff-port-018788 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node default-k8s-diff-port-018788 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node default-k8s-diff-port-018788 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29m                kubelet          Node default-k8s-diff-port-018788 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                29m                kubelet          Node default-k8s-diff-port-018788 status is now: NodeReady
	  Normal  RegisteredNode           28m                node-controller  Node default-k8s-diff-port-018788 event: Registered Node default-k8s-diff-port-018788 in Controller
	  Normal  Starting                 20m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  20m (x8 over 20m)  kubelet          Node default-k8s-diff-port-018788 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m (x8 over 20m)  kubelet          Node default-k8s-diff-port-018788 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20m (x7 over 20m)  kubelet          Node default-k8s-diff-port-018788 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           20m                node-controller  Node default-k8s-diff-port-018788 event: Registered Node default-k8s-diff-port-018788 in Controller
	
	
	==> dmesg <==
	[Jan 3 20:13] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.067208] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.666113] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.054375] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.130012] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000009] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.401020] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000080] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.643595] systemd-fstab-generator[650]: Ignoring "noauto" for root device
	[  +0.101420] systemd-fstab-generator[661]: Ignoring "noauto" for root device
	[  +0.157078] systemd-fstab-generator[674]: Ignoring "noauto" for root device
	[  +0.123171] systemd-fstab-generator[685]: Ignoring "noauto" for root device
	[  +0.235849] systemd-fstab-generator[709]: Ignoring "noauto" for root device
	[ +17.156848] systemd-fstab-generator[924]: Ignoring "noauto" for root device
	[Jan 3 20:14] kauditd_printk_skb: 29 callbacks suppressed
	
	
	==> etcd [3bacdb6bf662499f958fae41c1520a7f963ecec63e66bca7076854532316bd9d] <==
	{"level":"warn","ts":"2024-01-03T20:14:00.097299Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"333.432294ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15793434297366553493 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/coredns-5dd5756b68-zxzqg.17a6ef7ec7aff220\" mod_revision:561 > success:<request_put:<key:\"/registry/events/kube-system/coredns-5dd5756b68-zxzqg.17a6ef7ec7aff220\" value_size:729 lease:6570062260511777395 >> failure:<request_range:<key:\"/registry/events/kube-system/coredns-5dd5756b68-zxzqg.17a6ef7ec7aff220\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-01-03T20:14:00.098408Z","caller":"traceutil/trace.go:171","msg":"trace[1027757226] linearizableReadLoop","detail":"{readStateIndex:604; appliedIndex:603; }","duration":"516.697306ms","start":"2024-01-03T20:13:59.581694Z","end":"2024-01-03T20:14:00.098391Z","steps":["trace[1027757226] 'read index received'  (duration: 181.792383ms)","trace[1027757226] 'applied index is now lower than readState.Index'  (duration: 334.903556ms)"],"step_count":2}
	{"level":"info","ts":"2024-01-03T20:14:00.098586Z","caller":"traceutil/trace.go:171","msg":"trace[238564223] transaction","detail":"{read_only:false; response_revision:566; number_of_response:1; }","duration":"518.806985ms","start":"2024-01-03T20:13:59.57977Z","end":"2024-01-03T20:14:00.098577Z","steps":["trace[238564223] 'process raft request'  (duration: 183.851432ms)","trace[238564223] 'compare'  (duration: 330.400221ms)"],"step_count":2}
	{"level":"warn","ts":"2024-01-03T20:14:00.098669Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-03T20:13:59.579757Z","time spent":"518.871861ms","remote":"127.0.0.1:39160","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":817,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/coredns-5dd5756b68-zxzqg.17a6ef7ec7aff220\" mod_revision:561 > success:<request_put:<key:\"/registry/events/kube-system/coredns-5dd5756b68-zxzqg.17a6ef7ec7aff220\" value_size:729 lease:6570062260511777395 >> failure:<request_range:<key:\"/registry/events/kube-system/coredns-5dd5756b68-zxzqg.17a6ef7ec7aff220\" > >"}
	{"level":"warn","ts":"2024-01-03T20:14:00.098922Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"517.234497ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-018788\" ","response":"range_response_count:1 size:6780"}
	{"level":"info","ts":"2024-01-03T20:14:00.098986Z","caller":"traceutil/trace.go:171","msg":"trace[1159610439] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-018788; range_end:; response_count:1; response_revision:566; }","duration":"517.302039ms","start":"2024-01-03T20:13:59.581676Z","end":"2024-01-03T20:14:00.098978Z","steps":["trace[1159610439] 'agreement among raft nodes before linearized reading'  (duration: 517.168573ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-03T20:14:00.099026Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-03T20:13:59.581666Z","time spent":"517.35421ms","remote":"127.0.0.1:39184","response type":"/etcdserverpb.KV/Range","request count":0,"request size":72,"response count":1,"response size":6802,"request content":"key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-018788\" "}
	{"level":"warn","ts":"2024-01-03T20:14:00.099317Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"511.029754ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/ephemeral-volume-controller\" ","response":"range_response_count:1 size:220"}
	{"level":"info","ts":"2024-01-03T20:14:00.09974Z","caller":"traceutil/trace.go:171","msg":"trace[1912603648] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/ephemeral-volume-controller; range_end:; response_count:1; response_revision:566; }","duration":"511.448791ms","start":"2024-01-03T20:13:59.588279Z","end":"2024-01-03T20:14:00.099728Z","steps":["trace[1912603648] 'agreement among raft nodes before linearized reading'  (duration: 510.99958ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-03T20:14:00.100011Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-03T20:13:59.588268Z","time spent":"511.658757ms","remote":"127.0.0.1:39188","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":242,"request content":"key:\"/registry/serviceaccounts/kube-system/ephemeral-volume-controller\" "}
	{"level":"info","ts":"2024-01-03T20:23:54.130271Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":856}
	{"level":"info","ts":"2024-01-03T20:23:54.133806Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":856,"took":"2.551329ms","hash":3697743643}
	{"level":"info","ts":"2024-01-03T20:23:54.133913Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3697743643,"revision":856,"compact-revision":-1}
	{"level":"info","ts":"2024-01-03T20:28:54.138243Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1098}
	{"level":"info","ts":"2024-01-03T20:28:54.140567Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1098,"took":"1.644362ms","hash":1524117458}
	{"level":"info","ts":"2024-01-03T20:28:54.140732Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1524117458,"revision":1098,"compact-revision":856}
	{"level":"info","ts":"2024-01-03T20:32:52.888411Z","caller":"traceutil/trace.go:171","msg":"trace[1025963297] linearizableReadLoop","detail":"{readStateIndex:1806; appliedIndex:1805; }","duration":"122.034364ms","start":"2024-01-03T20:32:52.766323Z","end":"2024-01-03T20:32:52.888358Z","steps":["trace[1025963297] 'read index received'  (duration: 121.731839ms)","trace[1025963297] 'applied index is now lower than readState.Index'  (duration: 301.474µs)"],"step_count":2}
	{"level":"warn","ts":"2024-01-03T20:32:52.888757Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"122.441203ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1129"}
	{"level":"info","ts":"2024-01-03T20:32:52.889183Z","caller":"traceutil/trace.go:171","msg":"trace[1047662924] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1534; }","duration":"122.980337ms","start":"2024-01-03T20:32:52.766183Z","end":"2024-01-03T20:32:52.889163Z","steps":["trace[1047662924] 'agreement among raft nodes before linearized reading'  (duration: 122.292165ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-03T20:32:53.159526Z","caller":"traceutil/trace.go:171","msg":"trace[1041693862] transaction","detail":"{read_only:false; response_revision:1535; number_of_response:1; }","duration":"262.707368ms","start":"2024-01-03T20:32:52.896787Z","end":"2024-01-03T20:32:53.159495Z","steps":["trace[1041693862] 'process raft request'  (duration: 191.054826ms)","trace[1041693862] 'compare'  (duration: 71.512314ms)"],"step_count":2}
	{"level":"warn","ts":"2024-01-03T20:32:53.40669Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"136.868032ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-03T20:32:53.406926Z","caller":"traceutil/trace.go:171","msg":"trace[1351632780] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1535; }","duration":"137.104014ms","start":"2024-01-03T20:32:53.269795Z","end":"2024-01-03T20:32:53.406899Z","steps":["trace[1351632780] 'range keys from in-memory index tree'  (duration: 136.754334ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-03T20:33:54.147716Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1342}
	{"level":"info","ts":"2024-01-03T20:33:54.149493Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1342,"took":"1.521028ms","hash":2187885179}
	{"level":"info","ts":"2024-01-03T20:33:54.149556Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2187885179,"revision":1342,"compact-revision":1098}
	
	
	==> kernel <==
	 20:34:38 up 21 min,  0 users,  load average: 0.14, 0.24, 0.17
	Linux default-k8s-diff-port-018788 5.10.57 #1 SMP Sat Dec 16 11:03:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [ce56b3ad3d4b7d59eb61a980afcc5105c128779756da52fc886d00597b03c6cc] <==
	W0103 20:29:56.716020       1 handler_proxy.go:93] no RequestInfo found in the context
	E0103 20:29:56.716045       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0103 20:29:56.716052       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0103 20:30:55.586770       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0103 20:31:55.587215       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0103 20:31:56.715938       1 handler_proxy.go:93] no RequestInfo found in the context
	E0103 20:31:56.716056       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0103 20:31:56.716064       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0103 20:31:56.716237       1 handler_proxy.go:93] no RequestInfo found in the context
	E0103 20:31:56.716290       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0103 20:31:56.717166       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0103 20:32:55.586732       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0103 20:33:55.586662       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0103 20:33:55.719977       1 handler_proxy.go:93] no RequestInfo found in the context
	E0103 20:33:55.720162       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0103 20:33:55.720618       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0103 20:33:56.720686       1 handler_proxy.go:93] no RequestInfo found in the context
	E0103 20:33:56.720791       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0103 20:33:56.720828       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0103 20:33:56.720808       1 handler_proxy.go:93] no RequestInfo found in the context
	E0103 20:33:56.720947       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0103 20:33:56.722012       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [2b7de3342fdb5423de182fd009630fc36cee11de3a5f5ff06603fedd80e2d94b] <==
	I0103 20:28:41.438609       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0103 20:29:11.025063       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0103 20:29:11.447433       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0103 20:29:41.031158       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0103 20:29:41.455974       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0103 20:30:09.794746       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="372.607µs"
	E0103 20:30:11.035717       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0103 20:30:11.464679       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0103 20:30:20.793436       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="135.502µs"
	E0103 20:30:41.046678       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0103 20:30:41.473220       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0103 20:31:11.053577       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0103 20:31:11.481789       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0103 20:31:41.059053       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0103 20:31:41.493596       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0103 20:32:11.068380       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0103 20:32:11.502564       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0103 20:32:41.073657       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0103 20:32:41.512002       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0103 20:33:11.080176       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0103 20:33:11.525712       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0103 20:33:41.086427       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0103 20:33:41.535482       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0103 20:34:11.099839       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0103 20:34:11.548401       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [b1525243614b036cac3291b5a2fbf29c28fdb68757d81418e7e09cfec2c36032] <==
	I0103 20:14:02.083263       1 server_others.go:69] "Using iptables proxy"
	I0103 20:14:02.110269       1 node.go:141] Successfully retrieved node IP: 192.168.39.139
	I0103 20:14:02.214438       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0103 20:14:02.214519       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0103 20:14:02.217341       1 server_others.go:152] "Using iptables Proxier"
	I0103 20:14:02.217536       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0103 20:14:02.217714       1 server.go:846] "Version info" version="v1.28.4"
	I0103 20:14:02.221329       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0103 20:14:02.222827       1 config.go:188] "Starting service config controller"
	I0103 20:14:02.222908       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0103 20:14:02.222965       1 config.go:97] "Starting endpoint slice config controller"
	I0103 20:14:02.222986       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0103 20:14:02.228416       1 config.go:315] "Starting node config controller"
	I0103 20:14:02.228492       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0103 20:14:02.323511       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0103 20:14:02.323616       1 shared_informer.go:318] Caches are synced for service config
	I0103 20:14:02.328743       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [abbaa7d1ca8582bf77f6d0951a5eca6711a89471f2239b41df4bd359e01e5a7c] <==
	I0103 20:13:52.770655       1 serving.go:348] Generated self-signed cert in-memory
	W0103 20:13:55.641836       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0103 20:13:55.641943       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0103 20:13:55.641988       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0103 20:13:55.642013       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0103 20:13:55.727971       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0103 20:13:55.728190       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0103 20:13:55.731897       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0103 20:13:55.731989       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0103 20:13:55.734983       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0103 20:13:55.735213       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0103 20:13:55.834363       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Wed 2024-01-03 20:13:21 UTC, ends at Wed 2024-01-03 20:34:38 UTC. --
	Jan 03 20:31:48 default-k8s-diff-port-018788 kubelet[930]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 03 20:31:48 default-k8s-diff-port-018788 kubelet[930]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 03 20:31:48 default-k8s-diff-port-018788 kubelet[930]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 03 20:31:56 default-k8s-diff-port-018788 kubelet[930]: E0103 20:31:56.776842     930 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pgbbj" podUID="ee3963d9-1627-4e78-91e5-1f92c2011f4b"
	Jan 03 20:32:07 default-k8s-diff-port-018788 kubelet[930]: E0103 20:32:07.776169     930 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pgbbj" podUID="ee3963d9-1627-4e78-91e5-1f92c2011f4b"
	Jan 03 20:32:20 default-k8s-diff-port-018788 kubelet[930]: E0103 20:32:20.777148     930 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pgbbj" podUID="ee3963d9-1627-4e78-91e5-1f92c2011f4b"
	Jan 03 20:32:32 default-k8s-diff-port-018788 kubelet[930]: E0103 20:32:32.776002     930 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pgbbj" podUID="ee3963d9-1627-4e78-91e5-1f92c2011f4b"
	Jan 03 20:32:45 default-k8s-diff-port-018788 kubelet[930]: E0103 20:32:45.775880     930 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pgbbj" podUID="ee3963d9-1627-4e78-91e5-1f92c2011f4b"
	Jan 03 20:32:48 default-k8s-diff-port-018788 kubelet[930]: E0103 20:32:48.797750     930 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 03 20:32:48 default-k8s-diff-port-018788 kubelet[930]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 03 20:32:48 default-k8s-diff-port-018788 kubelet[930]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 03 20:32:48 default-k8s-diff-port-018788 kubelet[930]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 03 20:32:58 default-k8s-diff-port-018788 kubelet[930]: E0103 20:32:58.778524     930 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pgbbj" podUID="ee3963d9-1627-4e78-91e5-1f92c2011f4b"
	Jan 03 20:33:11 default-k8s-diff-port-018788 kubelet[930]: E0103 20:33:11.775894     930 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pgbbj" podUID="ee3963d9-1627-4e78-91e5-1f92c2011f4b"
	Jan 03 20:33:25 default-k8s-diff-port-018788 kubelet[930]: E0103 20:33:25.775851     930 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pgbbj" podUID="ee3963d9-1627-4e78-91e5-1f92c2011f4b"
	Jan 03 20:33:37 default-k8s-diff-port-018788 kubelet[930]: E0103 20:33:37.776317     930 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pgbbj" podUID="ee3963d9-1627-4e78-91e5-1f92c2011f4b"
	Jan 03 20:33:48 default-k8s-diff-port-018788 kubelet[930]: E0103 20:33:48.794841     930 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 03 20:33:48 default-k8s-diff-port-018788 kubelet[930]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 03 20:33:48 default-k8s-diff-port-018788 kubelet[930]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 03 20:33:48 default-k8s-diff-port-018788 kubelet[930]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 03 20:33:48 default-k8s-diff-port-018788 kubelet[930]: E0103 20:33:48.820977     930 container_manager_linux.go:514] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service"
	Jan 03 20:33:50 default-k8s-diff-port-018788 kubelet[930]: E0103 20:33:50.776342     930 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pgbbj" podUID="ee3963d9-1627-4e78-91e5-1f92c2011f4b"
	Jan 03 20:34:02 default-k8s-diff-port-018788 kubelet[930]: E0103 20:34:02.775543     930 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pgbbj" podUID="ee3963d9-1627-4e78-91e5-1f92c2011f4b"
	Jan 03 20:34:14 default-k8s-diff-port-018788 kubelet[930]: E0103 20:34:14.775748     930 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pgbbj" podUID="ee3963d9-1627-4e78-91e5-1f92c2011f4b"
	Jan 03 20:34:27 default-k8s-diff-port-018788 kubelet[930]: E0103 20:34:27.776896     930 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pgbbj" podUID="ee3963d9-1627-4e78-91e5-1f92c2011f4b"
	
	
	==> storage-provisioner [365147e198ba529e04c62145182c0e5131c1812d4b8842299d0236cbf556e40f] <==
	I0103 20:14:01.995617       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0103 20:14:32.036225       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [3d1fa8b05cd7cc8dd982013b6295fb6f79f2c8db93aa0c408fc57ea7e898125a] <==
	I0103 20:14:33.158472       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0103 20:14:33.170158       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0103 20:14:33.170273       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0103 20:14:50.584462       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0103 20:14:50.584776       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-018788_a2335e3e-d422-40a0-ba4c-1fdc7c29325b!
	I0103 20:14:50.587381       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9d22760a-d369-4f87-9839-fef853b9b5b7", APIVersion:"v1", ResourceVersion:"644", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-018788_a2335e3e-d422-40a0-ba4c-1fdc7c29325b became leader
	I0103 20:14:50.685054       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-018788_a2335e3e-d422-40a0-ba4c-1fdc7c29325b!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-018788 -n default-k8s-diff-port-018788
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-018788 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-pgbbj
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-018788 describe pod metrics-server-57f55c9bc5-pgbbj
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-018788 describe pod metrics-server-57f55c9bc5-pgbbj: exit status 1 (66.394083ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-pgbbj" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-018788 describe pod metrics-server-57f55c9bc5-pgbbj: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (432.63s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (140.33s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-195281 --alsologtostderr -v=3
E0103 20:33:32.532432   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/flannel-719541/client.crt: no such file or directory
E0103 20:33:58.358581   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/addons-848866/client.crt: no such file or directory
E0103 20:34:07.102691   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/ingress-addon-legacy-736101/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p newest-cni-195281 --alsologtostderr -v=3: exit status 82 (2m1.794660026s)

                                                
                                                
-- stdout --
	* Stopping node "newest-cni-195281"  ...
	* Stopping node "newest-cni-195281"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0103 20:33:20.483577   67858 out.go:296] Setting OutFile to fd 1 ...
	I0103 20:33:20.483914   67858 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 20:33:20.483928   67858 out.go:309] Setting ErrFile to fd 2...
	I0103 20:33:20.483935   67858 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 20:33:20.484162   67858 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17885-9609/.minikube/bin
	I0103 20:33:20.484487   67858 out.go:303] Setting JSON to false
	I0103 20:33:20.484585   67858 mustload.go:65] Loading cluster: newest-cni-195281
	I0103 20:33:20.485055   67858 config.go:182] Loaded profile config "newest-cni-195281": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0103 20:33:20.485159   67858 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/newest-cni-195281/config.json ...
	I0103 20:33:20.485316   67858 mustload.go:65] Loading cluster: newest-cni-195281
	I0103 20:33:20.485427   67858 config.go:182] Loaded profile config "newest-cni-195281": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0103 20:33:20.485450   67858 stop.go:39] StopHost: newest-cni-195281
	I0103 20:33:20.485953   67858 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:33:20.486004   67858 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:33:20.502888   67858 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37067
	I0103 20:33:20.503372   67858 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:33:20.504023   67858 main.go:141] libmachine: Using API Version  1
	I0103 20:33:20.504050   67858 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:33:20.504506   67858 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:33:20.507416   67858 out.go:177] * Stopping node "newest-cni-195281"  ...
	I0103 20:33:20.509433   67858 main.go:141] libmachine: Stopping "newest-cni-195281"...
	I0103 20:33:20.509455   67858 main.go:141] libmachine: (newest-cni-195281) Calling .GetState
	I0103 20:33:20.511481   67858 main.go:141] libmachine: (newest-cni-195281) Calling .Stop
	I0103 20:33:20.515748   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 0/60
	I0103 20:33:21.517270   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 1/60
	I0103 20:33:22.518701   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 2/60
	I0103 20:33:23.520528   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 3/60
	I0103 20:33:24.522666   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 4/60
	I0103 20:33:25.524769   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 5/60
	I0103 20:33:26.526243   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 6/60
	I0103 20:33:27.528105   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 7/60
	I0103 20:33:28.529711   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 8/60
	I0103 20:33:29.531336   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 9/60
	I0103 20:33:30.533533   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 10/60
	I0103 20:33:31.535014   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 11/60
	I0103 20:33:32.537078   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 12/60
	I0103 20:33:33.538791   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 13/60
	I0103 20:33:34.540361   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 14/60
	I0103 20:33:35.542642   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 15/60
	I0103 20:33:36.543971   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 16/60
	I0103 20:33:37.546346   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 17/60
	I0103 20:33:38.547642   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 18/60
	I0103 20:33:39.549010   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 19/60
	I0103 20:33:40.551744   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 20/60
	I0103 20:33:41.553559   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 21/60
	I0103 20:33:42.555581   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 22/60
	I0103 20:33:43.557148   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 23/60
	I0103 20:33:44.559374   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 24/60
	I0103 20:33:45.561493   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 25/60
	I0103 20:33:46.562984   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 26/60
	I0103 20:33:47.565134   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 27/60
	I0103 20:33:48.566989   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 28/60
	I0103 20:33:49.569062   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 29/60
	I0103 20:33:50.571080   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 30/60
	I0103 20:33:51.573071   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 31/60
	I0103 20:33:52.574824   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 32/60
	I0103 20:33:53.576965   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 33/60
	I0103 20:33:54.578333   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 34/60
	I0103 20:33:55.580447   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 35/60
	I0103 20:33:56.581803   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 36/60
	I0103 20:33:57.584104   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 37/60
	I0103 20:33:58.585924   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 38/60
	I0103 20:33:59.587696   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 39/60
	I0103 20:34:00.590326   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 40/60
	I0103 20:34:01.592106   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 41/60
	I0103 20:34:02.594142   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 42/60
	I0103 20:34:03.595891   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 43/60
	I0103 20:34:04.597433   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 44/60
	I0103 20:34:05.599321   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 45/60
	I0103 20:34:06.601455   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 46/60
	I0103 20:34:07.603009   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 47/60
	I0103 20:34:08.605291   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 48/60
	I0103 20:34:09.606682   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 49/60
	I0103 20:34:10.608490   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 50/60
	I0103 20:34:11.610249   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 51/60
	I0103 20:34:12.611942   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 52/60
	I0103 20:34:13.613764   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 53/60
	I0103 20:34:14.615284   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 54/60
	I0103 20:34:15.617274   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 55/60
	I0103 20:34:16.618849   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 56/60
	I0103 20:34:17.621081   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 57/60
	I0103 20:34:18.623514   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 58/60
	I0103 20:34:19.625170   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 59/60
	I0103 20:34:20.626617   67858 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0103 20:34:20.626667   67858 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0103 20:34:20.626690   67858 retry.go:31] will retry after 1.448369911s: Temporary Error: stop: unable to stop vm, current state "Running"
	I0103 20:34:22.075365   67858 stop.go:39] StopHost: newest-cni-195281
	I0103 20:34:22.075834   67858 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 20:34:22.075886   67858 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 20:34:22.090291   67858 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36747
	I0103 20:34:22.090819   67858 main.go:141] libmachine: () Calling .GetVersion
	I0103 20:34:22.091292   67858 main.go:141] libmachine: Using API Version  1
	I0103 20:34:22.091310   67858 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 20:34:22.091617   67858 main.go:141] libmachine: () Calling .GetMachineName
	I0103 20:34:22.094040   67858 out.go:177] * Stopping node "newest-cni-195281"  ...
	I0103 20:34:22.095510   67858 main.go:141] libmachine: Stopping "newest-cni-195281"...
	I0103 20:34:22.095529   67858 main.go:141] libmachine: (newest-cni-195281) Calling .GetState
	I0103 20:34:22.097339   67858 main.go:141] libmachine: (newest-cni-195281) Calling .Stop
	I0103 20:34:22.100838   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 0/60
	I0103 20:34:23.102675   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 1/60
	I0103 20:34:24.104270   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 2/60
	I0103 20:34:25.105605   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 3/60
	I0103 20:34:26.107511   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 4/60
	I0103 20:34:27.109314   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 5/60
	I0103 20:34:28.111666   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 6/60
	I0103 20:34:29.113497   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 7/60
	I0103 20:34:30.114905   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 8/60
	I0103 20:34:31.117298   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 9/60
	I0103 20:34:32.119382   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 10/60
	I0103 20:34:33.120720   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 11/60
	I0103 20:34:34.122290   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 12/60
	I0103 20:34:35.123611   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 13/60
	I0103 20:34:36.125270   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 14/60
	I0103 20:34:37.127341   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 15/60
	I0103 20:34:38.129288   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 16/60
	I0103 20:34:39.130863   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 17/60
	I0103 20:34:40.132256   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 18/60
	I0103 20:34:41.133777   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 19/60
	I0103 20:34:42.135737   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 20/60
	I0103 20:34:43.137605   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 21/60
	I0103 20:34:44.139166   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 22/60
	I0103 20:34:45.140776   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 23/60
	I0103 20:34:46.142343   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 24/60
	I0103 20:34:47.144484   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 25/60
	I0103 20:34:48.145877   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 26/60
	I0103 20:34:49.147301   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 27/60
	I0103 20:34:50.149034   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 28/60
	I0103 20:34:51.150800   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 29/60
	I0103 20:34:52.152678   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 30/60
	I0103 20:34:53.154213   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 31/60
	I0103 20:34:54.155968   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 32/60
	I0103 20:34:55.157475   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 33/60
	I0103 20:34:56.158870   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 34/60
	I0103 20:34:57.160757   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 35/60
	I0103 20:34:58.162496   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 36/60
	I0103 20:34:59.164067   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 37/60
	I0103 20:35:00.165580   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 38/60
	I0103 20:35:01.167187   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 39/60
	I0103 20:35:02.169198   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 40/60
	I0103 20:35:03.170858   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 41/60
	I0103 20:35:04.172227   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 42/60
	I0103 20:35:05.173684   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 43/60
	I0103 20:35:06.175246   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 44/60
	I0103 20:35:07.177247   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 45/60
	I0103 20:35:08.178905   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 46/60
	I0103 20:35:09.180652   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 47/60
	I0103 20:35:10.182141   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 48/60
	I0103 20:35:11.183588   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 49/60
	I0103 20:35:12.185807   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 50/60
	I0103 20:35:13.187407   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 51/60
	I0103 20:35:14.189002   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 52/60
	I0103 20:35:15.190678   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 53/60
	I0103 20:35:16.192262   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 54/60
	I0103 20:35:17.194392   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 55/60
	I0103 20:35:18.195988   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 56/60
	I0103 20:35:19.197559   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 57/60
	I0103 20:35:20.199065   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 58/60
	I0103 20:35:21.200782   67858 main.go:141] libmachine: (newest-cni-195281) Waiting for machine to stop 59/60
	I0103 20:35:22.201863   67858 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0103 20:35:22.201906   67858 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0103 20:35:22.204116   67858 out.go:177] 
	W0103 20:35:22.205790   67858 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0103 20:35:22.205811   67858 out.go:239] * 
	* 
	W0103 20:35:22.208113   67858 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0103 20:35:22.209504   67858 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p newest-cni-195281 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-195281 -n newest-cni-195281
E0103 20:35:26.968954   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/old-k8s-version-927922/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-195281 -n newest-cni-195281: exit status 3 (18.532051597s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0103 20:35:40.742822   68567 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.219:22: connect: no route to host
	E0103 20:35:40.742841   68567 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.219:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "newest-cni-195281" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/newest-cni/serial/Stop (140.33s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-195281 -n newest-cni-195281
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-195281 -n newest-cni-195281: exit status 3 (3.199533645s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0103 20:35:43.942901   68641 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.219:22: connect: no route to host
	E0103 20:35:43.942924   68641 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.219:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-195281 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0103 20:35:47.449397   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/old-k8s-version-927922/client.crt: no such file or directory
E0103 20:35:48.654177   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/functional-166268/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p newest-cni-195281 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.15328666s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.219:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p newest-cni-195281 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-195281 -n newest-cni-195281
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-195281 -n newest-cni-195281: exit status 3 (3.062161546s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0103 20:35:53.158870   68698 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.219:22: connect: no route to host
	E0103 20:35:53.158891   68698 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.219:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "newest-cni-195281" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (12.42s)

                                                
                                    

Test pass (233/300)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 23.32
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.07
10 TestDownloadOnly/v1.28.4/json-events 14.91
11 TestDownloadOnly/v1.28.4/preload-exists 0
15 TestDownloadOnly/v1.28.4/LogsDuration 0.07
17 TestDownloadOnly/v1.29.0-rc.2/json-events 17.36
18 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
22 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.07
23 TestDownloadOnly/DeleteAll 0.14
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.14
26 TestBinaryMirror 0.58
27 TestOffline 102.98
30 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
31 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
32 TestAddons/Setup 163.86
34 TestAddons/parallel/Registry 18.15
36 TestAddons/parallel/InspektorGadget 10.97
37 TestAddons/parallel/MetricsServer 6.93
38 TestAddons/parallel/HelmTiller 11.93
40 TestAddons/parallel/CSI 102.75
41 TestAddons/parallel/Headlamp 15.6
42 TestAddons/parallel/CloudSpanner 7.03
43 TestAddons/parallel/LocalPath 14.26
44 TestAddons/parallel/NvidiaDevicePlugin 5.67
45 TestAddons/parallel/Yakd 6.01
48 TestAddons/serial/GCPAuth/Namespaces 0.11
50 TestCertOptions 51.7
51 TestCertExpiration 307.15
53 TestForceSystemdFlag 51.98
54 TestForceSystemdEnv 72.44
56 TestKVMDriverInstallOrUpdate 6.72
60 TestErrorSpam/setup 46.08
61 TestErrorSpam/start 0.39
62 TestErrorSpam/status 0.76
63 TestErrorSpam/pause 1.57
64 TestErrorSpam/unpause 1.73
65 TestErrorSpam/stop 2.27
68 TestFunctional/serial/CopySyncFile 0
69 TestFunctional/serial/StartWithProxy 103.17
70 TestFunctional/serial/AuditLog 0
71 TestFunctional/serial/SoftStart 40.18
72 TestFunctional/serial/KubeContext 0.04
73 TestFunctional/serial/KubectlGetPods 0.08
76 TestFunctional/serial/CacheCmd/cache/add_remote 3.55
77 TestFunctional/serial/CacheCmd/cache/add_local 2.13
78 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
79 TestFunctional/serial/CacheCmd/cache/list 0.06
80 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.26
81 TestFunctional/serial/CacheCmd/cache/cache_reload 1.78
82 TestFunctional/serial/CacheCmd/cache/delete 0.12
83 TestFunctional/serial/MinikubeKubectlCmd 0.12
84 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
85 TestFunctional/serial/ExtraConfig 36.12
86 TestFunctional/serial/ComponentHealth 0.06
87 TestFunctional/serial/LogsCmd 1.52
88 TestFunctional/serial/LogsFileCmd 1.5
89 TestFunctional/serial/InvalidService 5.58
91 TestFunctional/parallel/ConfigCmd 0.44
92 TestFunctional/parallel/DashboardCmd 19.22
93 TestFunctional/parallel/DryRun 0.3
94 TestFunctional/parallel/InternationalLanguage 0.19
95 TestFunctional/parallel/StatusCmd 0.87
99 TestFunctional/parallel/ServiceCmdConnect 25.78
100 TestFunctional/parallel/AddonsCmd 0.16
101 TestFunctional/parallel/PersistentVolumeClaim 55.38
103 TestFunctional/parallel/SSHCmd 0.46
104 TestFunctional/parallel/CpCmd 1.49
105 TestFunctional/parallel/MySQL 24.65
106 TestFunctional/parallel/FileSync 0.27
107 TestFunctional/parallel/CertSync 1.54
111 TestFunctional/parallel/NodeLabels 0.06
113 TestFunctional/parallel/NonActiveRuntimeDisabled 0.49
115 TestFunctional/parallel/License 0.6
116 TestFunctional/parallel/UpdateContextCmd/no_changes 0.12
117 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.14
118 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.11
119 TestFunctional/parallel/ServiceCmd/DeployApp 26.21
129 TestFunctional/parallel/Version/short 0.06
130 TestFunctional/parallel/Version/components 1.03
131 TestFunctional/parallel/ImageCommands/ImageListShort 0.28
132 TestFunctional/parallel/ImageCommands/ImageListTable 0.31
133 TestFunctional/parallel/ImageCommands/ImageListJson 0.25
134 TestFunctional/parallel/ImageCommands/ImageListYaml 0.27
135 TestFunctional/parallel/ImageCommands/ImageBuild 4.1
136 TestFunctional/parallel/ImageCommands/Setup 2
137 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.66
138 TestFunctional/parallel/ServiceCmd/List 0.53
139 TestFunctional/parallel/ServiceCmd/JSONOutput 0.49
140 TestFunctional/parallel/ProfileCmd/profile_not_create 0.31
141 TestFunctional/parallel/ServiceCmd/HTTPS 0.34
142 TestFunctional/parallel/ProfileCmd/profile_list 0.3
143 TestFunctional/parallel/ServiceCmd/Format 0.35
144 TestFunctional/parallel/ProfileCmd/profile_json_output 0.34
145 TestFunctional/parallel/ServiceCmd/URL 0.34
146 TestFunctional/parallel/MountCmd/any-port 9.67
147 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.64
148 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 10.21
149 TestFunctional/parallel/MountCmd/specific-port 2.24
150 TestFunctional/parallel/MountCmd/VerifyCleanup 1.57
151 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.15
152 TestFunctional/parallel/ImageCommands/ImageRemove 0.54
153 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.76
154 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.31
155 TestFunctional/delete_addon-resizer_images 0.07
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
161 TestIngressAddonLegacy/StartLegacyK8sCluster 122.02
163 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 17.48
164 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.61
168 TestJSONOutput/start/Command 61.42
169 TestJSONOutput/start/Audit 0
171 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
172 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
174 TestJSONOutput/pause/Command 0.66
175 TestJSONOutput/pause/Audit 0
177 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
180 TestJSONOutput/unpause/Command 0.65
181 TestJSONOutput/unpause/Audit 0
183 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
186 TestJSONOutput/stop/Command 7.11
187 TestJSONOutput/stop/Audit 0
189 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
191 TestErrorJSONOutput 0.22
196 TestMainNoArgs 0.06
197 TestMinikubeProfile 91.86
200 TestMountStart/serial/StartWithMountFirst 25.77
201 TestMountStart/serial/VerifyMountFirst 0.42
202 TestMountStart/serial/StartWithMountSecond 25.11
203 TestMountStart/serial/VerifyMountSecond 0.39
204 TestMountStart/serial/DeleteFirst 0.89
205 TestMountStart/serial/VerifyMountPostDelete 0.41
206 TestMountStart/serial/Stop 1.1
207 TestMountStart/serial/RestartStopped 22.05
208 TestMountStart/serial/VerifyMountPostStop 0.41
211 TestMultiNode/serial/FreshStart2Nodes 106.21
212 TestMultiNode/serial/DeployApp2Nodes 5.49
214 TestMultiNode/serial/AddNode 43.4
215 TestMultiNode/serial/MultiNodeLabels 0.06
216 TestMultiNode/serial/ProfileList 0.21
217 TestMultiNode/serial/CopyFile 7.59
218 TestMultiNode/serial/StopNode 2.23
219 TestMultiNode/serial/StartAfterStop 29.89
221 TestMultiNode/serial/DeleteNode 1.57
223 TestMultiNode/serial/RestartMultiNode 446.07
224 TestMultiNode/serial/ValidateNameConflict 47.88
231 TestScheduledStopUnix 114.8
237 TestKubernetesUpgrade 159.41
240 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
244 TestNoKubernetes/serial/StartWithK8s 96.71
249 TestNetworkPlugins/group/false 3.43
253 TestNoKubernetes/serial/StartWithStopK8s 7.83
254 TestNoKubernetes/serial/Start 55.96
255 TestNoKubernetes/serial/VerifyK8sNotRunning 0.24
256 TestNoKubernetes/serial/ProfileList 0.79
257 TestNoKubernetes/serial/Stop 1.26
258 TestNoKubernetes/serial/StartNoArgs 41.34
259 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.22
260 TestStoppedBinaryUpgrade/Setup 1.68
270 TestPause/serial/Start 69.02
271 TestNetworkPlugins/group/auto/Start 65.57
272 TestNetworkPlugins/group/kindnet/Start 84.14
274 TestNetworkPlugins/group/auto/KubeletFlags 0.28
275 TestNetworkPlugins/group/auto/NetCatPod 12.33
276 TestNetworkPlugins/group/auto/DNS 0.18
277 TestNetworkPlugins/group/auto/Localhost 0.15
278 TestNetworkPlugins/group/auto/HairPin 0.15
279 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
280 TestNetworkPlugins/group/calico/Start 99.99
281 TestNetworkPlugins/group/kindnet/KubeletFlags 0.22
282 TestNetworkPlugins/group/kindnet/NetCatPod 13.24
283 TestNetworkPlugins/group/custom-flannel/Start 94.68
284 TestNetworkPlugins/group/kindnet/DNS 0.16
285 TestNetworkPlugins/group/kindnet/Localhost 0.15
286 TestNetworkPlugins/group/kindnet/HairPin 0.15
287 TestNetworkPlugins/group/enable-default-cni/Start 120.67
288 TestNetworkPlugins/group/calico/ControllerPod 6.01
289 TestNetworkPlugins/group/calico/KubeletFlags 0.25
290 TestNetworkPlugins/group/calico/NetCatPod 12.27
291 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.23
292 TestNetworkPlugins/group/custom-flannel/NetCatPod 14.27
293 TestNetworkPlugins/group/calico/DNS 0.2
294 TestNetworkPlugins/group/calico/Localhost 0.15
295 TestNetworkPlugins/group/calico/HairPin 0.17
296 TestNetworkPlugins/group/custom-flannel/DNS 0.2
297 TestNetworkPlugins/group/custom-flannel/Localhost 0.18
298 TestNetworkPlugins/group/custom-flannel/HairPin 0.2
299 TestStoppedBinaryUpgrade/MinikubeLogs 0.49
300 TestNetworkPlugins/group/flannel/Start 91.72
301 TestNetworkPlugins/group/bridge/Start 122.05
303 TestStartStop/group/old-k8s-version/serial/FirstStart 171.07
304 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.24
305 TestNetworkPlugins/group/enable-default-cni/NetCatPod 15.27
306 TestNetworkPlugins/group/enable-default-cni/DNS 0.21
307 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
308 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
310 TestStartStop/group/no-preload/serial/FirstStart 200.32
311 TestNetworkPlugins/group/flannel/ControllerPod 6.01
312 TestNetworkPlugins/group/flannel/KubeletFlags 0.24
313 TestNetworkPlugins/group/flannel/NetCatPod 11.25
314 TestNetworkPlugins/group/flannel/DNS 0.21
315 TestNetworkPlugins/group/flannel/Localhost 0.15
316 TestNetworkPlugins/group/flannel/HairPin 0.17
317 TestNetworkPlugins/group/bridge/KubeletFlags 0.24
319 TestStartStop/group/embed-certs/serial/FirstStart 101.53
320 TestNetworkPlugins/group/bridge/NetCatPod 11.3
321 TestNetworkPlugins/group/bridge/DNS 0.18
322 TestNetworkPlugins/group/bridge/Localhost 0.16
323 TestNetworkPlugins/group/bridge/HairPin 0.17
325 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 105.23
326 TestStartStop/group/old-k8s-version/serial/DeployApp 13.52
327 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.26
329 TestStartStop/group/embed-certs/serial/DeployApp 9.31
330 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.22
332 TestStartStop/group/no-preload/serial/DeployApp 9.3
333 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.32
334 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.01
336 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.1
339 TestStartStop/group/old-k8s-version/serial/SecondStart 399.05
341 TestStartStop/group/embed-certs/serial/SecondStart 545.04
344 TestStartStop/group/no-preload/serial/SecondStart 549.79
345 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 557.7
355 TestStartStop/group/newest-cni/serial/FirstStart 59.68
356 TestStartStop/group/newest-cni/serial/DeployApp 0
357 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.49
360 TestStartStop/group/newest-cni/serial/SecondStart 328.88
361 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
362 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
363 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
364 TestStartStop/group/newest-cni/serial/Pause 2.71
TestDownloadOnly/v1.16.0/json-events (23.32s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-302470 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-302470 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (23.3198503s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (23.32s)

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-302470
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-302470: exit status 85 (71.170817ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-302470 | jenkins | v1.32.0 | 03 Jan 24 18:57 UTC |          |
	|         | -p download-only-302470        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/03 18:57:14
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0103 18:57:14.630222   16807 out.go:296] Setting OutFile to fd 1 ...
	I0103 18:57:14.630338   16807 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 18:57:14.630350   16807 out.go:309] Setting ErrFile to fd 2...
	I0103 18:57:14.630357   16807 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 18:57:14.630634   16807 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17885-9609/.minikube/bin
	W0103 18:57:14.630786   16807 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17885-9609/.minikube/config/config.json: open /home/jenkins/minikube-integration/17885-9609/.minikube/config/config.json: no such file or directory
	I0103 18:57:14.631443   16807 out.go:303] Setting JSON to true
	I0103 18:57:14.632373   16807 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2382,"bootTime":1704305853,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0103 18:57:14.632446   16807 start.go:138] virtualization: kvm guest
	I0103 18:57:14.635072   16807 out.go:97] [download-only-302470] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0103 18:57:14.636564   16807 out.go:169] MINIKUBE_LOCATION=17885
	W0103 18:57:14.635198   16807 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17885-9609/.minikube/cache/preloaded-tarball: no such file or directory
	I0103 18:57:14.635282   16807 notify.go:220] Checking for updates...
	I0103 18:57:14.639232   16807 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0103 18:57:14.640919   16807 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17885-9609/kubeconfig
	I0103 18:57:14.642623   16807 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17885-9609/.minikube
	I0103 18:57:14.643884   16807 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0103 18:57:14.646173   16807 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0103 18:57:14.646374   16807 driver.go:392] Setting default libvirt URI to qemu:///system
	I0103 18:57:14.749279   16807 out.go:97] Using the kvm2 driver based on user configuration
	I0103 18:57:14.749311   16807 start.go:298] selected driver: kvm2
	I0103 18:57:14.749317   16807 start.go:902] validating driver "kvm2" against <nil>
	I0103 18:57:14.749649   16807 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 18:57:14.749764   16807 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17885-9609/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0103 18:57:14.764871   16807 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0103 18:57:14.764922   16807 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0103 18:57:14.765416   16807 start_flags.go:394] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0103 18:57:14.765575   16807 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0103 18:57:14.765631   16807 cni.go:84] Creating CNI manager for ""
	I0103 18:57:14.765644   16807 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0103 18:57:14.765653   16807 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0103 18:57:14.765658   16807 start_flags.go:323] config:
	{Name:download-only-302470 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-302470 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 18:57:14.765849   16807 iso.go:125] acquiring lock: {Name:mk59d09085a9554144b68de9b7bfe0e0fce53cc5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 18:57:14.767770   16807 out.go:97] Downloading VM boot image ...
	I0103 18:57:14.767814   16807 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso.sha256 -> /home/jenkins/minikube-integration/17885-9609/.minikube/cache/iso/amd64/minikube-v1.32.1-1702708929-17806-amd64.iso
	I0103 18:57:23.328789   16807 out.go:97] Starting control plane node download-only-302470 in cluster download-only-302470
	I0103 18:57:23.328809   16807 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0103 18:57:23.426157   16807 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0103 18:57:23.426204   16807 cache.go:56] Caching tarball of preloaded images
	I0103 18:57:23.426340   16807 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0103 18:57:23.428311   16807 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0103 18:57:23.428343   16807 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I0103 18:57:23.531591   16807 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:432b600409d778ea7a21214e83948570 -> /home/jenkins/minikube-integration/17885-9609/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-302470"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.28.4/json-events (14.91s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-302470 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-302470 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (14.911690341s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (14.91s)

                                                
                                    
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-302470
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-302470: exit status 85 (71.829902ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-302470 | jenkins | v1.32.0 | 03 Jan 24 18:57 UTC |          |
	|         | -p download-only-302470        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-302470 | jenkins | v1.32.0 | 03 Jan 24 18:57 UTC |          |
	|         | -p download-only-302470        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/03 18:57:38
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0103 18:57:38.027332   16898 out.go:296] Setting OutFile to fd 1 ...
	I0103 18:57:38.027590   16898 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 18:57:38.027601   16898 out.go:309] Setting ErrFile to fd 2...
	I0103 18:57:38.027607   16898 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 18:57:38.027811   16898 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17885-9609/.minikube/bin
	W0103 18:57:38.027957   16898 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17885-9609/.minikube/config/config.json: open /home/jenkins/minikube-integration/17885-9609/.minikube/config/config.json: no such file or directory
	I0103 18:57:38.028402   16898 out.go:303] Setting JSON to true
	I0103 18:57:38.029171   16898 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2405,"bootTime":1704305853,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0103 18:57:38.029244   16898 start.go:138] virtualization: kvm guest
	I0103 18:57:38.031632   16898 out.go:97] [download-only-302470] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0103 18:57:38.031753   16898 notify.go:220] Checking for updates...
	I0103 18:57:38.033243   16898 out.go:169] MINIKUBE_LOCATION=17885
	I0103 18:57:38.034882   16898 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0103 18:57:38.036546   16898 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17885-9609/kubeconfig
	I0103 18:57:38.037997   16898 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17885-9609/.minikube
	I0103 18:57:38.039557   16898 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0103 18:57:38.042848   16898 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0103 18:57:38.043296   16898 config.go:182] Loaded profile config "download-only-302470": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W0103 18:57:38.043344   16898 start.go:810] api.Load failed for download-only-302470: filestore "download-only-302470": Docker machine "download-only-302470" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0103 18:57:38.043430   16898 driver.go:392] Setting default libvirt URI to qemu:///system
	W0103 18:57:38.043459   16898 start.go:810] api.Load failed for download-only-302470: filestore "download-only-302470": Docker machine "download-only-302470" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0103 18:57:38.076117   16898 out.go:97] Using the kvm2 driver based on existing profile
	I0103 18:57:38.076153   16898 start.go:298] selected driver: kvm2
	I0103 18:57:38.076162   16898 start.go:902] validating driver "kvm2" against &{Name:download-only-302470 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.16.0 ClusterName:download-only-302470 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 18:57:38.076693   16898 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 18:57:38.076781   16898 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17885-9609/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0103 18:57:38.091504   16898 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0103 18:57:38.092277   16898 cni.go:84] Creating CNI manager for ""
	I0103 18:57:38.092298   16898 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0103 18:57:38.092310   16898 start_flags.go:323] config:
	{Name:download-only-302470 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-302470 Namespace:defa
ult APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: Socke
tVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 18:57:38.092438   16898 iso.go:125] acquiring lock: {Name:mk59d09085a9554144b68de9b7bfe0e0fce53cc5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 18:57:38.094358   16898 out.go:97] Starting control plane node download-only-302470 in cluster download-only-302470
	I0103 18:57:38.094374   16898 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0103 18:57:38.226628   16898 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0103 18:57:38.226661   16898 cache.go:56] Caching tarball of preloaded images
	I0103 18:57:38.226821   16898 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0103 18:57:38.228720   16898 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0103 18:57:38.228735   16898 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	I0103 18:57:38.332063   16898 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b0bd7b3b222c094c365d9c9e10e48fc7 -> /home/jenkins/minikube-integration/17885-9609/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-302470"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.07s)
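The preload download recorded above can be reproduced and verified by hand; a minimal shell sketch, using only the tarball URL and md5 checksum that appear in the log (assumes curl and md5sum are available locally):

    # fetch the v1.28.4 cri-o preload tarball that minikube cached above
    curl -fLO https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
    # compare against the checksum minikube requested in the download URL
    echo "b0bd7b3b222c094c365d9c9e10e48fc7  preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4" | md5sum -c -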

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/json-events (17.36s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-302470 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-302470 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (17.356234935s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (17.36s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-302470
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-302470: exit status 85 (72.950835ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only           | download-only-302470 | jenkins | v1.32.0 | 03 Jan 24 18:57 UTC |          |
	|         | -p download-only-302470           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=kvm2                     |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-302470 | jenkins | v1.32.0 | 03 Jan 24 18:57 UTC |          |
	|         | -p download-only-302470           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=kvm2                     |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-302470 | jenkins | v1.32.0 | 03 Jan 24 18:57 UTC |          |
	|         | -p download-only-302470           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=kvm2                     |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/03 18:57:53
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0103 18:57:53.007681   16980 out.go:296] Setting OutFile to fd 1 ...
	I0103 18:57:53.007834   16980 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 18:57:53.007845   16980 out.go:309] Setting ErrFile to fd 2...
	I0103 18:57:53.007853   16980 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 18:57:53.008076   16980 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17885-9609/.minikube/bin
	W0103 18:57:53.008203   16980 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17885-9609/.minikube/config/config.json: open /home/jenkins/minikube-integration/17885-9609/.minikube/config/config.json: no such file or directory
	I0103 18:57:53.008633   16980 out.go:303] Setting JSON to true
	I0103 18:57:53.009439   16980 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2420,"bootTime":1704305853,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0103 18:57:53.009511   16980 start.go:138] virtualization: kvm guest
	I0103 18:57:53.011705   16980 out.go:97] [download-only-302470] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0103 18:57:53.013289   16980 out.go:169] MINIKUBE_LOCATION=17885
	I0103 18:57:53.011868   16980 notify.go:220] Checking for updates...
	I0103 18:57:53.016057   16980 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0103 18:57:53.017582   16980 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17885-9609/kubeconfig
	I0103 18:57:53.018909   16980 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17885-9609/.minikube
	I0103 18:57:53.020246   16980 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0103 18:57:53.022753   16980 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0103 18:57:53.023334   16980 config.go:182] Loaded profile config "download-only-302470": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	W0103 18:57:53.023391   16980 start.go:810] api.Load failed for download-only-302470: filestore "download-only-302470": Docker machine "download-only-302470" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0103 18:57:53.023482   16980 driver.go:392] Setting default libvirt URI to qemu:///system
	W0103 18:57:53.023534   16980 start.go:810] api.Load failed for download-only-302470: filestore "download-only-302470": Docker machine "download-only-302470" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0103 18:57:53.056006   16980 out.go:97] Using the kvm2 driver based on existing profile
	I0103 18:57:53.056036   16980 start.go:298] selected driver: kvm2
	I0103 18:57:53.056042   16980 start.go:902] validating driver "kvm2" against &{Name:download-only-302470 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.28.4 ClusterName:download-only-302470 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 18:57:53.056446   16980 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 18:57:53.056520   16980 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17885-9609/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0103 18:57:53.070486   16980 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0103 18:57:53.071243   16980 cni.go:84] Creating CNI manager for ""
	I0103 18:57:53.071265   16980 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0103 18:57:53.071280   16980 start_flags.go:323] config:
	{Name:download-only-302470 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-302470 Namespace
:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 18:57:53.071421   16980 iso.go:125] acquiring lock: {Name:mk59d09085a9554144b68de9b7bfe0e0fce53cc5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 18:57:53.073461   16980 out.go:97] Starting control plane node download-only-302470 in cluster download-only-302470
	I0103 18:57:53.073478   16980 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0103 18:57:53.207889   16980 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0103 18:57:53.207926   16980 cache.go:56] Caching tarball of preloaded images
	I0103 18:57:53.208094   16980 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0103 18:57:53.210163   16980 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0103 18:57:53.210192   16980 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 ...
	I0103 18:57:53.310204   16980 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:2e182f4d7475b49e22eaf15ea22c281b -> /home/jenkins/minikube-integration/17885-9609/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-302470"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-302470
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestBinaryMirror (0.58s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-440543 --alsologtostderr --binary-mirror http://127.0.0.1:33485 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-440543" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-440543
--- PASS: TestBinaryMirror (0.58s)

                                                
                                    
TestOffline (102.98s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-878561 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-878561 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m41.876479387s)
helpers_test.go:175: Cleaning up "offline-crio-878561" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-878561
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-878561: (1.102446384s)
--- PASS: TestOffline (102.98s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-848866
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-848866: exit status 85 (65.557715ms)

                                                
                                                
-- stdout --
	* Profile "addons-848866" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-848866"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-848866
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-848866: exit status 85 (63.046566ms)

                                                
                                                
-- stdout --
	* Profile "addons-848866" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-848866"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/Setup (163.86s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-848866 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-848866 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m43.861761544s)
--- PASS: TestAddons/Setup (163.86s)

                                                
                                    
TestAddons/parallel/Registry (18.15s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 33.795194ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-vb8nh" [8239cd82-c41f-448e-b099-83140af6d1b5] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.005959209s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-glv5v" [22a80b4a-fe0d-4fe5-a339-e484f216e167] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.004875162s
addons_test.go:340: (dbg) Run:  kubectl --context addons-848866 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-848866 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-848866 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.116077546s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-amd64 -p addons-848866 ip
2024/01/03 19:01:12 [DEBUG] GET http://192.168.39.253:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p addons-848866 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (18.15s)
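The registry checks this test performs can be repeated by hand; a minimal shell sketch (assumes the addons-848866 profile is still running; the in-cluster command is the one the test issues, and the final curl mirrors the GET on port 5000 logged above, taking the node IP from minikube rather than the literal address in the log):

    # in-cluster check, mirroring the registry-test pod the test creates
    kubectl --context addons-848866 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
    # host-side check against the registry endpoint on port 5000
    curl -v "http://$(out/minikube-linux-amd64 -p addons-848866 ip):5000"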

                                                
                                    
TestAddons/parallel/InspektorGadget (10.97s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-twhm9" [2c81af71-dd35-42e1-80f2-bb9aeb4f69ad] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.005758385s
addons_test.go:841: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-848866
addons_test.go:841: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-848866: (5.962212707s)
--- PASS: TestAddons/parallel/InspektorGadget (10.97s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.93s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 29.50092ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-vxk9c" [b3acd530-1430-4c25-9c02-6706eb256850] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.005299011s
addons_test.go:415: (dbg) Run:  kubectl --context addons-848866 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-amd64 -p addons-848866 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.93s)

                                                
                                    
TestAddons/parallel/HelmTiller (11.93s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 4.138556ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-gjh42" [d2fde79f-5c98-4c33-920b-7f58e5b30565] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.005795594s
addons_test.go:473: (dbg) Run:  kubectl --context addons-848866 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-848866 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (6.150703945s)
addons_test.go:490: (dbg) Run:  out/minikube-linux-amd64 -p addons-848866 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.93s)

                                                
                                    
TestAddons/parallel/CSI (102.75s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 32.070415ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-848866 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848866 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-848866 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [c5c1b49b-c086-4222-a834-cd05acfc84a4] Pending
helpers_test.go:344: "task-pv-pod" [c5c1b49b-c086-4222-a834-cd05acfc84a4] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [c5c1b49b-c086-4222-a834-cd05acfc84a4] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 15.004819835s
addons_test.go:584: (dbg) Run:  kubectl --context addons-848866 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-848866 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-848866 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-848866 delete pod task-pv-pod
addons_test.go:594: (dbg) Done: kubectl --context addons-848866 delete pod task-pv-pod: (1.172682533s)
addons_test.go:600: (dbg) Run:  kubectl --context addons-848866 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-848866 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848866 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848866 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848866 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848866 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848866 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848866 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848866 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848866 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848866 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848866 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848866 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848866 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-848866 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [26b5e328-52bd-4e75-8c8c-002908f82d63] Pending
helpers_test.go:344: "task-pv-pod-restore" [26b5e328-52bd-4e75-8c8c-002908f82d63] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [26b5e328-52bd-4e75-8c8c-002908f82d63] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004614538s
addons_test.go:626: (dbg) Run:  kubectl --context addons-848866 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-848866 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-848866 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-amd64 -p addons-848866 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-amd64 -p addons-848866 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.829016809s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-amd64 -p addons-848866 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (102.75s)
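The long run of helpers_test.go:394 lines above is a poll of the PVC phase; a minimal shell sketch of an equivalent wait loop (the loop itself is illustrative, the kubectl invocation is the one from the log):

    # poll the hpvc claim until the csi-hostpath-driver provisioner binds it
    until [ "$(kubectl --context addons-848866 get pvc hpvc -o jsonpath='{.status.phase}' -n default)" = "Bound" ]; do
      sleep 2
    done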

                                                
                                    
TestAddons/parallel/Headlamp (15.6s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-848866 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-848866 --alsologtostderr -v=1: (1.591781822s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7ddfbb94ff-87ghr" [ab26fec3-2021-46e5-a32f-d3e34f48e93a] Pending
helpers_test.go:344: "headlamp-7ddfbb94ff-87ghr" [ab26fec3-2021-46e5-a32f-d3e34f48e93a] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-87ghr" [ab26fec3-2021-46e5-a32f-d3e34f48e93a] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 14.003977189s
--- PASS: TestAddons/parallel/Headlamp (15.60s)

                                                
                                    
TestAddons/parallel/CloudSpanner (7.03s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-64c8c85f65-ksd6t" [0c715f98-f81e-430d-8351-f057b451acba] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004533324s
addons_test.go:860: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-848866
addons_test.go:860: (dbg) Done: out/minikube-linux-amd64 addons disable cloud-spanner -p addons-848866: (1.015825947s)
--- PASS: TestAddons/parallel/CloudSpanner (7.03s)

                                                
                                    
TestAddons/parallel/LocalPath (14.26s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-848866 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-848866 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848866 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848866 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848866 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848866 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848866 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848866 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848866 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848866 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [8cdcf02e-568f-4827-b62c-666b22674994] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [8cdcf02e-568f-4827-b62c-666b22674994] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [8cdcf02e-568f-4827-b62c-666b22674994] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.003885672s
addons_test.go:891: (dbg) Run:  kubectl --context addons-848866 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-amd64 -p addons-848866 ssh "cat /opt/local-path-provisioner/pvc-63737751-111f-49e1-b285-e8695e5515cf_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-848866 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-848866 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-amd64 -p addons-848866 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (14.26s)
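The read-back step of this test can be reproduced manually; a minimal shell sketch (assumes the profile is still up; the pvc-… directory name under /opt/local-path-provisioner varies per run, so a glob stands in for the UID shown in the log):

    # read the file the local-path-provisioner pod wrote for test-pvc
    out/minikube-linux-amd64 -p addons-848866 ssh \
      "cat /opt/local-path-provisioner/pvc-*_default_test-pvc/file1"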

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.67s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-r7lx5" [8aa19cd3-113d-4ffc-bc90-bb4545d5700d] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004761375s
addons_test.go:955: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-848866
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.67s)

                                                
                                    
TestAddons/parallel/Yakd (6.01s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-lc6bn" [bdd000ec-a410-46ec-a4a2-558160f3340f] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.00430097s
--- PASS: TestAddons/parallel/Yakd (6.01s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.11s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-848866 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-848866 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

TestCertOptions (51.7s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-940407 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-940407 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (49.610097148s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-940407 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-940407 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-940407 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-940407" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-940407
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-940407: (1.533513559s)
--- PASS: TestCertOptions (51.70s)
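Note: as a reference for the SAN check above, a minimal sketch; the grep filter is an assumption (the test parses the full openssl output in Go), everything else mirrors the commands in the log:
    out/minikube-linux-amd64 start -p cert-options-940407 --memory=2048 \
      --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 \
      --apiserver-names=localhost --apiserver-names=www.google.com \
      --apiserver-port=8555 --driver=kvm2 --container-runtime=crio
    # the generated apiserver certificate should list the extra IPs and names
    out/minikube-linux-amd64 -p cert-options-940407 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A2 "Subject Alternative Name"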

TestCertExpiration (307.15s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-948344 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
E0103 19:54:07.103017   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/ingress-addon-legacy-736101/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-948344 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m43.044807047s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-948344 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-948344 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (22.998571505s)
helpers_test.go:175: Cleaning up "cert-expiration-948344" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-948344
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-948344: (1.110429289s)
--- PASS: TestCertExpiration (307.15s)
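Note: the expiration flow above can be replayed by hand; a minimal sketch, where the explicit sleep is an assumption about how long to wait (the test simply runs the second start after the 3m expiry has passed):
    # issue certificates that expire after three minutes
    out/minikube-linux-amd64 start -p cert-expiration-948344 --memory=2048 \
      --cert-expiration=3m --driver=kvm2 --container-runtime=crio
    sleep 180  # wait for the certificates to expire
    # restarting with a one-year expiry forces the certificates to be regenerated
    out/minikube-linux-amd64 start -p cert-expiration-948344 --memory=2048 \
      --cert-expiration=8760h --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 delete -p cert-expiration-948344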

TestForceSystemdFlag (51.98s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-094236 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-094236 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (50.811386176s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-094236 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-094236" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-094236
--- PASS: TestForceSystemdFlag (51.98s)
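Note: to see what --force-systemd changes, a minimal sketch; the grep for cgroup_manager is an assumption about what the assertion looks for, the log only shows the config file being read:
    out/minikube-linux-amd64 start -p force-systemd-flag-094236 --memory=2048 \
      --force-systemd --driver=kvm2 --container-runtime=crio
    # CRI-O should be configured to use the systemd cgroup manager
    out/minikube-linux-amd64 -p force-systemd-flag-094236 ssh \
      "cat /etc/crio/crio.conf.d/02-crio.conf" | grep cgroup_manager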

TestForceSystemdEnv (72.44s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-892756 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-892756 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m11.434489142s)
helpers_test.go:175: Cleaning up "force-systemd-env-892756" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-892756
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-892756: (1.002251513s)
--- PASS: TestForceSystemdEnv (72.44s)

TestKVMDriverInstallOrUpdate (6.72s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (6.72s)

TestErrorSpam/setup (46.08s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-226992 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-226992 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-226992 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-226992 --driver=kvm2  --container-runtime=crio: (46.075274353s)
--- PASS: TestErrorSpam/setup (46.08s)

TestErrorSpam/start (0.39s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-226992 --log_dir /tmp/nospam-226992 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-226992 --log_dir /tmp/nospam-226992 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-226992 --log_dir /tmp/nospam-226992 start --dry-run
--- PASS: TestErrorSpam/start (0.39s)

TestErrorSpam/status (0.76s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-226992 --log_dir /tmp/nospam-226992 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-226992 --log_dir /tmp/nospam-226992 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-226992 --log_dir /tmp/nospam-226992 status
--- PASS: TestErrorSpam/status (0.76s)

TestErrorSpam/pause (1.57s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-226992 --log_dir /tmp/nospam-226992 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-226992 --log_dir /tmp/nospam-226992 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-226992 --log_dir /tmp/nospam-226992 pause
--- PASS: TestErrorSpam/pause (1.57s)

TestErrorSpam/unpause (1.73s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-226992 --log_dir /tmp/nospam-226992 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-226992 --log_dir /tmp/nospam-226992 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-226992 --log_dir /tmp/nospam-226992 unpause
--- PASS: TestErrorSpam/unpause (1.73s)

TestErrorSpam/stop (2.27s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-226992 --log_dir /tmp/nospam-226992 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-226992 --log_dir /tmp/nospam-226992 stop: (2.095714596s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-226992 --log_dir /tmp/nospam-226992 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-226992 --log_dir /tmp/nospam-226992 stop
--- PASS: TestErrorSpam/stop (2.27s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1854: local sync path: /home/jenkins/minikube-integration/17885-9609/.minikube/files/etc/test/nested/copy/16795/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (103.17s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2233: (dbg) Run:  out/minikube-linux-amd64 start -p functional-166268 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2233: (dbg) Done: out/minikube-linux-amd64 start -p functional-166268 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m43.168756456s)
--- PASS: TestFunctional/serial/StartWithProxy (103.17s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (40.18s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-166268 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-166268 --alsologtostderr -v=8: (40.181737199s)
functional_test.go:659: soft start took 40.182297632s for "functional-166268" cluster.
--- PASS: TestFunctional/serial/SoftStart (40.18s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-166268 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.55s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-166268 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-166268 cache add registry.k8s.io/pause:3.1: (1.130025886s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-166268 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-166268 cache add registry.k8s.io/pause:3.3: (1.174413164s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-166268 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-166268 cache add registry.k8s.io/pause:latest: (1.240706068s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.55s)

TestFunctional/serial/CacheCmd/cache/add_local (2.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-166268 /tmp/TestFunctionalserialCacheCmdcacheadd_local87872329/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-166268 cache add minikube-local-cache-test:functional-166268
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-166268 cache add minikube-local-cache-test:functional-166268: (1.790630579s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-166268 cache delete minikube-local-cache-test:functional-166268
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-166268
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.13s)
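Note: the local-cache flow above is: build an image on the host, push it into the node's image cache, then remove it again. A minimal sketch, assuming any local build context in place of the test's temp directory:
    docker build -t minikube-local-cache-test:functional-166268 .
    out/minikube-linux-amd64 -p functional-166268 cache add minikube-local-cache-test:functional-166268
    out/minikube-linux-amd64 -p functional-166268 cache delete minikube-local-cache-test:functional-166268
    docker rmi minikube-local-cache-test:functional-166268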

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.26s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-166268 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.26s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.78s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-166268 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-166268 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-166268 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (236.979013ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-166268 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-166268 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.78s)
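Note: the reload sequence above in plain commands; a minimal sketch (the || echo is only to make the expected non-zero exit visible, the test checks the exit status directly):
    # remove the image from the node and confirm crictl no longer finds it
    out/minikube-linux-amd64 -p functional-166268 ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-amd64 -p functional-166268 ssh sudo crictl inspecti registry.k8s.io/pause:latest || echo "image removed"
    # reload the cached images onto the node and re-check
    out/minikube-linux-amd64 -p functional-166268 cache reload
    out/minikube-linux-amd64 -p functional-166268 ssh sudo crictl inspecti registry.k8s.io/pause:latest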

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-166268 kubectl -- --context functional-166268 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-166268 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (36.12s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-166268 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-166268 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (36.118540769s)
functional_test.go:757: restart took 36.118649183s for "functional-166268" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (36.12s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-166268 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)
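Note: the same control-plane health check can be approximated with plain kubectl; a minimal sketch, where the jsonpath expression is an assumption (the test decodes the JSON in Go and checks phase and the Ready condition):
    kubectl --context functional-166268 get po -l tier=control-plane -n kube-system \
      -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.status.phase}{"\n"}{end}'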

TestFunctional/serial/LogsCmd (1.52s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-166268 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-166268 logs: (1.517994715s)
--- PASS: TestFunctional/serial/LogsCmd (1.52s)

TestFunctional/serial/LogsFileCmd (1.5s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-166268 logs --file /tmp/TestFunctionalserialLogsFileCmd464887043/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-166268 logs --file /tmp/TestFunctionalserialLogsFileCmd464887043/001/logs.txt: (1.502929712s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.50s)

TestFunctional/serial/InvalidService (5.58s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2320: (dbg) Run:  kubectl --context functional-166268 apply -f testdata/invalidsvc.yaml
functional_test.go:2334: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-166268
functional_test.go:2334: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-166268: exit status 115 (299.170903ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.50.47:32749 |
	|-----------|-------------|-------------|----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2326: (dbg) Run:  kubectl --context functional-166268 delete -f testdata/invalidsvc.yaml
functional_test.go:2326: (dbg) Done: kubectl --context functional-166268 delete -f testdata/invalidsvc.yaml: (2.070885027s)
--- PASS: TestFunctional/serial/InvalidService (5.58s)
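Note: the failure mode exercised above is that minikube service against a service whose pod never becomes ready exits 115 with SVC_UNREACHABLE. A minimal sketch of the same round trip (testdata paths are relative to the minikube test directory):
    kubectl --context functional-166268 apply -f testdata/invalidsvc.yaml
    # expected to fail: no running pod backs the service
    out/minikube-linux-amd64 service invalid-svc -p functional-166268; echo "exit: $?"
    kubectl --context functional-166268 delete -f testdata/invalidsvc.yaml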

TestFunctional/parallel/ConfigCmd (0.44s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-166268 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-166268 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-166268 config get cpus: exit status 14 (64.500346ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-166268 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-166268 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-166268 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-166268 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-166268 config get cpus: exit status 14 (59.801181ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.44s)
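Note: the exit codes are the point of the test above: config get on an unset key exits 14, while set/unset round-trip cleanly. A minimal sketch:
    out/minikube-linux-amd64 -p functional-166268 config get cpus; echo "exit: $?"   # 14 while unset
    out/minikube-linux-amd64 -p functional-166268 config set cpus 2
    out/minikube-linux-amd64 -p functional-166268 config get cpus                    # prints 2
    out/minikube-linux-amd64 -p functional-166268 config unset cpus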

TestFunctional/parallel/DashboardCmd (19.22s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-166268 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-166268 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 24533: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (19.22s)

TestFunctional/parallel/DryRun (0.3s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-166268 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-166268 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (151.182679ms)

                                                
                                                
-- stdout --
	* [functional-166268] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17885
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17885-9609/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17885-9609/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0103 19:11:30.931651   25124 out.go:296] Setting OutFile to fd 1 ...
	I0103 19:11:30.931870   25124 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 19:11:30.931907   25124 out.go:309] Setting ErrFile to fd 2...
	I0103 19:11:30.931915   25124 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 19:11:30.932263   25124 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17885-9609/.minikube/bin
	I0103 19:11:30.933069   25124 out.go:303] Setting JSON to false
	I0103 19:11:30.934480   25124 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3238,"bootTime":1704305853,"procs":240,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0103 19:11:30.934595   25124 start.go:138] virtualization: kvm guest
	I0103 19:11:30.937231   25124 out.go:177] * [functional-166268] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0103 19:11:30.938913   25124 out.go:177]   - MINIKUBE_LOCATION=17885
	I0103 19:11:30.938964   25124 notify.go:220] Checking for updates...
	I0103 19:11:30.940333   25124 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0103 19:11:30.941763   25124 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17885-9609/kubeconfig
	I0103 19:11:30.943371   25124 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17885-9609/.minikube
	I0103 19:11:30.944716   25124 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0103 19:11:30.946027   25124 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0103 19:11:30.947716   25124 config.go:182] Loaded profile config "functional-166268": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 19:11:30.948111   25124 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 19:11:30.948169   25124 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 19:11:30.962201   25124 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35751
	I0103 19:11:30.962609   25124 main.go:141] libmachine: () Calling .GetVersion
	I0103 19:11:30.963169   25124 main.go:141] libmachine: Using API Version  1
	I0103 19:11:30.963189   25124 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 19:11:30.963548   25124 main.go:141] libmachine: () Calling .GetMachineName
	I0103 19:11:30.963724   25124 main.go:141] libmachine: (functional-166268) Calling .DriverName
	I0103 19:11:30.963963   25124 driver.go:392] Setting default libvirt URI to qemu:///system
	I0103 19:11:30.964262   25124 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 19:11:30.964309   25124 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 19:11:30.978475   25124 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35403
	I0103 19:11:30.978943   25124 main.go:141] libmachine: () Calling .GetVersion
	I0103 19:11:30.979472   25124 main.go:141] libmachine: Using API Version  1
	I0103 19:11:30.979499   25124 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 19:11:30.979819   25124 main.go:141] libmachine: () Calling .GetMachineName
	I0103 19:11:30.979998   25124 main.go:141] libmachine: (functional-166268) Calling .DriverName
	I0103 19:11:31.012892   25124 out.go:177] * Using the kvm2 driver based on existing profile
	I0103 19:11:31.014590   25124 start.go:298] selected driver: kvm2
	I0103 19:11:31.014628   25124 start.go:902] validating driver "kvm2" against &{Name:functional-166268 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:functional-166268 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.50.47 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false Extr
aDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 19:11:31.014802   25124 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0103 19:11:31.017440   25124 out.go:177] 
	W0103 19:11:31.019143   25124 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0103 19:11:31.020694   25124 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-166268 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.30s)
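Note: the dry run above only exercises minikube's pre-flight validation: asking for 250MB fails with exit 23 (RSRC_INSUFFICIENT_REQ_MEMORY, usable minimum 1800MB) without touching the existing VM. A minimal sketch:
    # fails validation: requested memory is below the usable minimum
    out/minikube-linux-amd64 start -p functional-166268 --dry-run --memory 250MB \
      --driver=kvm2 --container-runtime=crio; echo "exit: $?"
    # with no memory override the same dry run passes validation
    out/minikube-linux-amd64 start -p functional-166268 --dry-run --driver=kvm2 --container-runtime=crio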

TestFunctional/parallel/InternationalLanguage (0.19s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-166268 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-166268 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (185.482834ms)

                                                
                                                
-- stdout --
	* [functional-166268] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17885
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17885-9609/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17885-9609/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0103 19:11:16.355670   23933 out.go:296] Setting OutFile to fd 1 ...
	I0103 19:11:16.355880   23933 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 19:11:16.355917   23933 out.go:309] Setting ErrFile to fd 2...
	I0103 19:11:16.355933   23933 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 19:11:16.356239   23933 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17885-9609/.minikube/bin
	I0103 19:11:16.356811   23933 out.go:303] Setting JSON to false
	I0103 19:11:16.357686   23933 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3224,"bootTime":1704305853,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0103 19:11:16.357770   23933 start.go:138] virtualization: kvm guest
	I0103 19:11:16.360510   23933 out.go:177] * [functional-166268] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	I0103 19:11:16.362402   23933 out.go:177]   - MINIKUBE_LOCATION=17885
	I0103 19:11:16.362440   23933 notify.go:220] Checking for updates...
	I0103 19:11:16.363862   23933 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0103 19:11:16.365344   23933 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17885-9609/kubeconfig
	I0103 19:11:16.366919   23933 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17885-9609/.minikube
	I0103 19:11:16.368411   23933 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0103 19:11:16.369941   23933 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0103 19:11:16.371670   23933 config.go:182] Loaded profile config "functional-166268": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 19:11:16.372166   23933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 19:11:16.372238   23933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 19:11:16.386900   23933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37547
	I0103 19:11:16.387305   23933 main.go:141] libmachine: () Calling .GetVersion
	I0103 19:11:16.387859   23933 main.go:141] libmachine: Using API Version  1
	I0103 19:11:16.387885   23933 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 19:11:16.388263   23933 main.go:141] libmachine: () Calling .GetMachineName
	I0103 19:11:16.388450   23933 main.go:141] libmachine: (functional-166268) Calling .DriverName
	I0103 19:11:16.388727   23933 driver.go:392] Setting default libvirt URI to qemu:///system
	I0103 19:11:16.389023   23933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 19:11:16.389067   23933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 19:11:16.404648   23933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43073
	I0103 19:11:16.405020   23933 main.go:141] libmachine: () Calling .GetVersion
	I0103 19:11:16.405518   23933 main.go:141] libmachine: Using API Version  1
	I0103 19:11:16.405543   23933 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 19:11:16.405898   23933 main.go:141] libmachine: () Calling .GetMachineName
	I0103 19:11:16.406083   23933 main.go:141] libmachine: (functional-166268) Calling .DriverName
	I0103 19:11:16.441408   23933 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0103 19:11:16.442978   23933 start.go:298] selected driver: kvm2
	I0103 19:11:16.442992   23933 start.go:902] validating driver "kvm2" against &{Name:functional-166268 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:functional-166268 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.50.47 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertEx
piration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 19:11:16.443112   23933 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0103 19:11:16.445446   23933 out.go:177] 
	W0103 19:11:16.446873   23933 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0103 19:11:16.448196   23933 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.19s)

TestFunctional/parallel/StatusCmd (0.87s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-166268 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-166268 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-166268 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.87s)

TestFunctional/parallel/ServiceCmdConnect (25.78s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-166268 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-166268 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-556bf" [c7bfb78a-f98b-486a-abae-e751f1c5acef] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
E0103 19:10:55.308013   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/addons-848866/client.crt: no such file or directory
E0103 19:10:55.314437   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/addons-848866/client.crt: no such file or directory
helpers_test.go:344: "hello-node-connect-55497b8b78-556bf" [c7bfb78a-f98b-486a-abae-e751f1c5acef] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 25.004518001s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-166268 service hello-node-connect --url
E0103 19:11:15.792655   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/addons-848866/client.crt: no such file or directory
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.50.47:30127
functional_test.go:1674: http://192.168.50.47:30127: success! body:

Hostname: hello-node-connect-55497b8b78-556bf

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.50.47:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.50.47:30127
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (25.78s)
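Note: the connectivity check above reduces to exposing a deployment as a NodePort and fetching the URL minikube reports; a minimal sketch, where the final curl is an assumption (the test issues the HTTP request from Go):
    kubectl --context functional-166268 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-166268 expose deployment hello-node-connect --type=NodePort --port=8080
    # once the pod is Running this prints a URL such as http://192.168.50.47:30127
    out/minikube-linux-amd64 -p functional-166268 service hello-node-connect --url
    curl "$(out/minikube-linux-amd64 -p functional-166268 service hello-node-connect --url)"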

TestFunctional/parallel/AddonsCmd (0.16s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-166268 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-166268 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

TestFunctional/parallel/PersistentVolumeClaim (55.38s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [d16a16b6-d863-4ce0-b45a-a31a12a76529] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004776064s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-166268 get storageclass -o=json
E0103 19:10:55.325410   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/addons-848866/client.crt: no such file or directory
E0103 19:10:55.345829   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/addons-848866/client.crt: no such file or directory
E0103 19:10:55.386248   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/addons-848866/client.crt: no such file or directory
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-166268 apply -f testdata/storage-provisioner/pvc.yaml
E0103 19:10:55.466603   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/addons-848866/client.crt: no such file or directory
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-166268 get pvc myclaim -o=json
E0103 19:10:55.627723   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/addons-848866/client.crt: no such file or directory
E0103 19:10:55.948011   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/addons-848866/client.crt: no such file or directory
E0103 19:10:56.588986   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/addons-848866/client.crt: no such file or directory
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-166268 get pvc myclaim -o=json
E0103 19:10:57.869497   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/addons-848866/client.crt: no such file or directory
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-166268 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [7b3fa69d-041e-4b8d-adef-6b8f8539f5e2] Pending
helpers_test.go:344: "sp-pod" [7b3fa69d-041e-4b8d-adef-6b8f8539f5e2] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E0103 19:11:00.430408   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/addons-848866/client.crt: no such file or directory
E0103 19:11:05.551639   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/addons-848866/client.crt: no such file or directory
helpers_test.go:344: "sp-pod" [7b3fa69d-041e-4b8d-adef-6b8f8539f5e2] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 25.007239773s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-166268 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-166268 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-166268 delete -f testdata/storage-provisioner/pod.yaml: (1.246802298s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-166268 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [d1739225-20b8-46f7-85ff-262d3277a537] Pending
helpers_test.go:344: "sp-pod" [d1739225-20b8-46f7-85ff-262d3277a537] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [d1739225-20b8-46f7-85ff-262d3277a537] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 21.005476357s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-166268 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (55.38s)
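Note: the persistence check above can be reproduced by hand against the same profile. This is only a rough sketch: the harness polls pod state through its own helpers, and "kubectl wait" plus the already-existing myclaim PVC are assumed here in its place.

    kubectl --context functional-166268 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-166268 wait --for=condition=Ready pod/sp-pod --timeout=180s
    kubectl --context functional-166268 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-166268 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-166268 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-166268 wait --for=condition=Ready pod/sp-pod --timeout=180s
    kubectl --context functional-166268 exec sp-pod -- ls /tmp/mount    # "foo" should survive the pod recreation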

                                                
                                    
TestFunctional/parallel/SSHCmd (0.46s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-166268 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-166268 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.46s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.49s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-166268 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-166268 ssh -n functional-166268 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-166268 cp functional-166268:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2797875463/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-166268 ssh -n functional-166268 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-166268 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-166268 ssh -n functional-166268 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.49s)
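Note: the cp checks above amount to a copy round trip between the host and the node. A minimal manual equivalent (the local /tmp output path below is arbitrary, not something the test uses):

    out/minikube-linux-amd64 -p functional-166268 cp testdata/cp-test.txt /home/docker/cp-test.txt
    out/minikube-linux-amd64 -p functional-166268 ssh -n functional-166268 "sudo cat /home/docker/cp-test.txt"
    out/minikube-linux-amd64 -p functional-166268 cp functional-166268:/home/docker/cp-test.txt /tmp/cp-test-roundtrip.txt
    diff testdata/cp-test.txt /tmp/cp-test-roundtrip.txt && echo "round trip OK"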

                                                
                                    
TestFunctional/parallel/MySQL (24.65s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: (dbg) Run:  kubectl --context functional-166268 replace --force -f testdata/mysql.yaml
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-p6xrc" [5fbe2972-a1d6-4076-9603-f342931ffc86] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-p6xrc" [5fbe2972-a1d6-4076-9603-f342931ffc86] Running
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 22.439408587s
functional_test.go:1806: (dbg) Run:  kubectl --context functional-166268 exec mysql-859648c796-p6xrc -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-166268 exec mysql-859648c796-p6xrc -- mysql -ppassword -e "show databases;": exit status 1 (346.503412ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-166268 exec mysql-859648c796-p6xrc -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (24.65s)
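Note: the first "show databases;" attempt above fails with ERROR 2002 because mysqld is still initializing inside the pod even though the container already reports Running; the test simply retries until the socket comes up. A hand-rolled equivalent of that retry (not part of the harness) would be:

    for i in $(seq 1 10); do
      kubectl --context functional-166268 exec mysql-859648c796-p6xrc -- \
        mysql -ppassword -e "show databases;" && break
      sleep 5
    done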

                                                
                                    
TestFunctional/parallel/FileSync (0.27s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1928: Checking for existence of /etc/test/nested/copy/16795/hosts within VM
functional_test.go:1930: (dbg) Run:  out/minikube-linux-amd64 -p functional-166268 ssh "sudo cat /etc/test/nested/copy/16795/hosts"
functional_test.go:1935: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.27s)
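Note: FileSync exercises minikube's file sync feature, where files placed under the profile's MINIKUBE_HOME files/ directory are copied into the guest at the mirrored path when the node starts. A sketch of how the file checked above would be staged (paths taken from the log; 16795 is just the test run's PID, and MINIKUBE_HOME defaults to ~/.minikube):

    mkdir -p $MINIKUBE_HOME/files/etc/test/nested/copy/16795
    echo "Test file for checking file sync process" > $MINIKUBE_HOME/files/etc/test/nested/copy/16795/hosts
    # after the node (re)starts, the file appears at the mirrored path inside the guest:
    out/minikube-linux-amd64 -p functional-166268 ssh "sudo cat /etc/test/nested/copy/16795/hosts"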

                                                
                                    
TestFunctional/parallel/CertSync (1.54s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1971: Checking for existence of /etc/ssl/certs/16795.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-166268 ssh "sudo cat /etc/ssl/certs/16795.pem"
functional_test.go:1971: Checking for existence of /usr/share/ca-certificates/16795.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-166268 ssh "sudo cat /usr/share/ca-certificates/16795.pem"
functional_test.go:1971: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-166268 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/167952.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-166268 ssh "sudo cat /etc/ssl/certs/167952.pem"
functional_test.go:1998: Checking for existence of /usr/share/ca-certificates/167952.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-166268 ssh "sudo cat /usr/share/ca-certificates/167952.pem"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-166268 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.54s)
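Note: CertSync checks both the plain .pem copies and the OpenSSL subject-hash names (51391683.0, 3ec20f2e.0) that the system trust store uses. To confirm a hash name by hand on the host (assuming openssl is installed; /path/to/16795.pem is a placeholder for wherever the test certificate lives):

    openssl x509 -noout -hash -in /path/to/16795.pem    # prints the subject hash, e.g. 51391683, used as <hash>.0 in /etc/ssl/certs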

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-166268 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)
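Note: the go-template above only dumps the label keys of the first node; the same information is easier to read with the standard --show-labels flag:

    kubectl --context functional-166268 get nodes --show-labels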

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.49s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2026: (dbg) Run:  out/minikube-linux-amd64 -p functional-166268 ssh "sudo systemctl is-active docker"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-166268 ssh "sudo systemctl is-active docker": exit status 1 (230.743827ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2026: (dbg) Run:  out/minikube-linux-amd64 -p functional-166268 ssh "sudo systemctl is-active containerd"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-166268 ssh "sudo systemctl is-active containerd": exit status 1 (256.713897ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.49s)
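Note: the exit status 1 / "Process exited with status 3" pairs above are expected. This cluster runs cri-o, so "systemctl is-active docker" and "systemctl is-active containerd" print "inactive" and return a non-zero code (3 for an inactive unit), which the ssh wrapper surfaces as a failure. A quick manual contrast with the active runtime (assuming the cri-o unit is named crio):

    out/minikube-linux-amd64 -p functional-166268 ssh "sudo systemctl is-active crio"      # active, exit 0
    out/minikube-linux-amd64 -p functional-166268 ssh "sudo systemctl is-active docker"    # inactive, non-zero exit
    echo $?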

                                                
                                    
TestFunctional/parallel/License (0.6s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2287: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.60s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-166268 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-166268 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-166268 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (26.21s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-166268 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-166268 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-kf987" [56333343-e4f1-4a3c-a30c-d9845eb02b0b] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-kf987" [56333343-e4f1-4a3c-a30c-d9845eb02b0b] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 26.005306326s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (26.21s)
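Note: DeployApp is plain kubectl, namely create a deployment from the echoserver image, expose it as a NodePort, and wait for the pod to come up. Outside the harness, with "kubectl rollout status" standing in for the test's own pod polling, the same steps are:

    kubectl --context functional-166268 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-166268 expose deployment hello-node --type=NodePort --port=8080
    kubectl --context functional-166268 rollout status deployment/hello-node --timeout=600s
    kubectl --context functional-166268 get svc hello-node -o wide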

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2255: (dbg) Run:  out/minikube-linux-amd64 -p functional-166268 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (1.03s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2269: (dbg) Run:  out/minikube-linux-amd64 -p functional-166268 version -o=json --components
functional_test.go:2269: (dbg) Done: out/minikube-linux-amd64 -p functional-166268 version -o=json --components: (1.02551758s)
--- PASS: TestFunctional/parallel/Version/components (1.03s)
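Note: the --components flag emits a single JSON document with per-component version information; piping it through jq (if installed on the host) makes individual entries easy to inspect:

    out/minikube-linux-amd64 -p functional-166268 version -o=json --components | jq .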

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-166268 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-166268 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
localhost/minikube-local-cache-test:functional-166268
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-166268
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-166268 image ls --format short --alsologtostderr:
I0103 19:11:38.981715   25424 out.go:296] Setting OutFile to fd 1 ...
I0103 19:11:38.981834   25424 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0103 19:11:38.981845   25424 out.go:309] Setting ErrFile to fd 2...
I0103 19:11:38.981852   25424 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0103 19:11:38.982125   25424 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17885-9609/.minikube/bin
I0103 19:11:38.982904   25424 config.go:182] Loaded profile config "functional-166268": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0103 19:11:38.983054   25424 config.go:182] Loaded profile config "functional-166268": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0103 19:11:38.983588   25424 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0103 19:11:38.983646   25424 main.go:141] libmachine: Launching plugin server for driver kvm2
I0103 19:11:38.998002   25424 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40019
I0103 19:11:38.998498   25424 main.go:141] libmachine: () Calling .GetVersion
I0103 19:11:38.999105   25424 main.go:141] libmachine: Using API Version  1
I0103 19:11:38.999136   25424 main.go:141] libmachine: () Calling .SetConfigRaw
I0103 19:11:38.999472   25424 main.go:141] libmachine: () Calling .GetMachineName
I0103 19:11:38.999718   25424 main.go:141] libmachine: (functional-166268) Calling .GetState
I0103 19:11:39.001852   25424 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0103 19:11:39.001898   25424 main.go:141] libmachine: Launching plugin server for driver kvm2
I0103 19:11:39.016709   25424 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36143
I0103 19:11:39.017263   25424 main.go:141] libmachine: () Calling .GetVersion
I0103 19:11:39.017974   25424 main.go:141] libmachine: Using API Version  1
I0103 19:11:39.018003   25424 main.go:141] libmachine: () Calling .SetConfigRaw
I0103 19:11:39.018334   25424 main.go:141] libmachine: () Calling .GetMachineName
I0103 19:11:39.018778   25424 main.go:141] libmachine: (functional-166268) Calling .DriverName
I0103 19:11:39.018939   25424 ssh_runner.go:195] Run: systemctl --version
I0103 19:11:39.018959   25424 main.go:141] libmachine: (functional-166268) Calling .GetSSHHostname
I0103 19:11:39.021450   25424 main.go:141] libmachine: (functional-166268) DBG | domain functional-166268 has defined MAC address 52:54:00:8f:c2:90 in network mk-functional-166268
I0103 19:11:39.021882   25424 main.go:141] libmachine: (functional-166268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:c2:90", ip: ""} in network mk-functional-166268: {Iface:virbr1 ExpiryTime:2024-01-03 20:07:47 +0000 UTC Type:0 Mac:52:54:00:8f:c2:90 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:functional-166268 Clientid:01:52:54:00:8f:c2:90}
I0103 19:11:39.021904   25424 main.go:141] libmachine: (functional-166268) DBG | domain functional-166268 has defined IP address 192.168.50.47 and MAC address 52:54:00:8f:c2:90 in network mk-functional-166268
I0103 19:11:39.021935   25424 main.go:141] libmachine: (functional-166268) Calling .GetSSHPort
I0103 19:11:39.022104   25424 main.go:141] libmachine: (functional-166268) Calling .GetSSHKeyPath
I0103 19:11:39.022374   25424 main.go:141] libmachine: (functional-166268) Calling .GetSSHUsername
I0103 19:11:39.022561   25424 sshutil.go:53] new ssh client: &{IP:192.168.50.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/functional-166268/id_rsa Username:docker}
I0103 19:11:39.114640   25424 ssh_runner.go:195] Run: sudo crictl images --output json
I0103 19:11:39.177425   25424 main.go:141] libmachine: Making call to close driver server
I0103 19:11:39.177442   25424 main.go:141] libmachine: (functional-166268) Calling .Close
I0103 19:11:39.177744   25424 main.go:141] libmachine: Successfully made call to close driver server
I0103 19:11:39.177762   25424 main.go:141] libmachine: Making call to close connection to plugin binary
I0103 19:11:39.177776   25424 main.go:141] libmachine: Making call to close driver server
I0103 19:11:39.177785   25424 main.go:141] libmachine: (functional-166268) Calling .Close
I0103 19:11:39.177998   25424 main.go:141] libmachine: Successfully made call to close driver server
I0103 19:11:39.178014   25424 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)
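Note: as the stderr trace shows, "image ls" works by ssh'ing into the node and querying the container runtime directly with "sudo crictl images --output json", then formatting the result. The same raw data can be read straight from the guest:

    out/minikube-linux-amd64 -p functional-166268 ssh "sudo crictl images"
    out/minikube-linux-amd64 -p functional-166268 ssh "sudo crictl images --output json"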

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-166268 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-166268 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/etcd                    | 3.5.9-0            | 73deb9a3f7025 | 295MB  |
| registry.k8s.io/kube-controller-manager | v1.28.4            | d058aa5ab969c | 123MB  |
| registry.k8s.io/kube-proxy              | v1.28.4            | 83f6cc407eed8 | 74.7MB |
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | c7d1297425461 | 65.3MB |
| localhost/minikube-local-cache-test     | functional-166268  | 973ab58b9ac3f | 3.35kB |
| registry.k8s.io/kube-apiserver          | v1.28.4            | 7fe0e6f37db33 | 127MB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| docker.io/library/nginx                 | latest             | d453dd892d935 | 191MB  |
| gcr.io/google-containers/addon-resizer  | functional-166268  | ffd4cfbbe753e | 34.1MB |
| registry.k8s.io/coredns/coredns         | v1.10.1            | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/kube-scheduler          | v1.28.4            | e3db313c6dbc0 | 61.6MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-166268 image ls --format table --alsologtostderr:
I0103 19:11:39.257777   25515 out.go:296] Setting OutFile to fd 1 ...
I0103 19:11:39.258050   25515 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0103 19:11:39.258077   25515 out.go:309] Setting ErrFile to fd 2...
I0103 19:11:39.258085   25515 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0103 19:11:39.258419   25515 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17885-9609/.minikube/bin
I0103 19:11:39.259230   25515 config.go:182] Loaded profile config "functional-166268": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0103 19:11:39.259379   25515 config.go:182] Loaded profile config "functional-166268": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0103 19:11:39.259772   25515 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0103 19:11:39.259811   25515 main.go:141] libmachine: Launching plugin server for driver kvm2
I0103 19:11:39.273648   25515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33275
I0103 19:11:39.274080   25515 main.go:141] libmachine: () Calling .GetVersion
I0103 19:11:39.274635   25515 main.go:141] libmachine: Using API Version  1
I0103 19:11:39.274662   25515 main.go:141] libmachine: () Calling .SetConfigRaw
I0103 19:11:39.274969   25515 main.go:141] libmachine: () Calling .GetMachineName
I0103 19:11:39.275135   25515 main.go:141] libmachine: (functional-166268) Calling .GetState
I0103 19:11:39.277426   25515 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0103 19:11:39.277465   25515 main.go:141] libmachine: Launching plugin server for driver kvm2
I0103 19:11:39.292071   25515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45825
I0103 19:11:39.292623   25515 main.go:141] libmachine: () Calling .GetVersion
I0103 19:11:39.293191   25515 main.go:141] libmachine: Using API Version  1
I0103 19:11:39.293224   25515 main.go:141] libmachine: () Calling .SetConfigRaw
I0103 19:11:39.293612   25515 main.go:141] libmachine: () Calling .GetMachineName
I0103 19:11:39.293961   25515 main.go:141] libmachine: (functional-166268) Calling .DriverName
I0103 19:11:39.294171   25515 ssh_runner.go:195] Run: systemctl --version
I0103 19:11:39.294206   25515 main.go:141] libmachine: (functional-166268) Calling .GetSSHHostname
I0103 19:11:39.297083   25515 main.go:141] libmachine: (functional-166268) DBG | domain functional-166268 has defined MAC address 52:54:00:8f:c2:90 in network mk-functional-166268
I0103 19:11:39.297481   25515 main.go:141] libmachine: (functional-166268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:c2:90", ip: ""} in network mk-functional-166268: {Iface:virbr1 ExpiryTime:2024-01-03 20:07:47 +0000 UTC Type:0 Mac:52:54:00:8f:c2:90 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:functional-166268 Clientid:01:52:54:00:8f:c2:90}
I0103 19:11:39.297522   25515 main.go:141] libmachine: (functional-166268) DBG | domain functional-166268 has defined IP address 192.168.50.47 and MAC address 52:54:00:8f:c2:90 in network mk-functional-166268
I0103 19:11:39.297658   25515 main.go:141] libmachine: (functional-166268) Calling .GetSSHPort
I0103 19:11:39.297863   25515 main.go:141] libmachine: (functional-166268) Calling .GetSSHKeyPath
I0103 19:11:39.298025   25515 main.go:141] libmachine: (functional-166268) Calling .GetSSHUsername
I0103 19:11:39.298156   25515 sshutil.go:53] new ssh client: &{IP:192.168.50.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/functional-166268/id_rsa Username:docker}
I0103 19:11:39.394228   25515 ssh_runner.go:195] Run: sudo crictl images --output json
I0103 19:11:39.489075   25515 main.go:141] libmachine: Making call to close driver server
I0103 19:11:39.489088   25515 main.go:141] libmachine: (functional-166268) Calling .Close
I0103 19:11:39.489369   25515 main.go:141] libmachine: Successfully made call to close driver server
I0103 19:11:39.489388   25515 main.go:141] libmachine: Making call to close connection to plugin binary
I0103 19:11:39.489416   25515 main.go:141] libmachine: (functional-166268) DBG | Closing plugin on server side
I0103 19:11:39.489420   25515 main.go:141] libmachine: Making call to close driver server
I0103 19:11:39.489481   25515 main.go:141] libmachine: (functional-166268) Calling .Close
I0103 19:11:39.489747   25515 main.go:141] libmachine: Successfully made call to close driver server
I0103 19:11:39.489779   25515 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-166268 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-166268 image ls --format json --alsologtostderr:
[{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"295456551"},{"id":"83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":["registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"74749335"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18
cc","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"65258016"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"d453dd892d9357f3559b967478ae9cbc417b52de66b53142f6c16c8a275486b9","repoDigests":["docker.io/library/nginx@sha256:2bdc49f2f8ae8d8dc50ed00f2ee56d00385c6f8bc8a8b320d0a294d9e3b49026","docker.io/library/nginx@sha256:9784f7985f6fba493ba30fb68419f50484fee8faaf677216cb95826f8491d2e9"],"repoTags":["docker.io/library/nginx:latest"],"size":"190867606"},{"id":"973ab58b9ac3f8c0
260e94a3804c261917c3054523ba3c4b23b100f4ac8bc7f5","repoDigests":["localhost/minikube-local-cache-test@sha256:498310b9b83b53282848773c50aac45c18de15948c9d25983f99f26b1a5266e1"],"repoTags":["localhost/minikube-local-cache-test:functional-166268"],"size":"3345"},{"id":"7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":["registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499","registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"127226832"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-166268"],"size":"34114467"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8
s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e","registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53621675"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha
256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06e
d43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c","registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"123261750"},{"id":"e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":["registry.k8s.io/kube
-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba","registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"61551410"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-166268 image ls --format json --alsologtostderr:
I0103 19:11:39.237784   25507 out.go:296] Setting OutFile to fd 1 ...
I0103 19:11:39.237930   25507 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0103 19:11:39.237941   25507 out.go:309] Setting ErrFile to fd 2...
I0103 19:11:39.237947   25507 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0103 19:11:39.238224   25507 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17885-9609/.minikube/bin
I0103 19:11:39.238984   25507 config.go:182] Loaded profile config "functional-166268": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0103 19:11:39.239111   25507 config.go:182] Loaded profile config "functional-166268": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0103 19:11:39.239583   25507 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0103 19:11:39.239635   25507 main.go:141] libmachine: Launching plugin server for driver kvm2
I0103 19:11:39.255239   25507 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44495
I0103 19:11:39.255726   25507 main.go:141] libmachine: () Calling .GetVersion
I0103 19:11:39.256326   25507 main.go:141] libmachine: Using API Version  1
I0103 19:11:39.256356   25507 main.go:141] libmachine: () Calling .SetConfigRaw
I0103 19:11:39.256752   25507 main.go:141] libmachine: () Calling .GetMachineName
I0103 19:11:39.256948   25507 main.go:141] libmachine: (functional-166268) Calling .GetState
I0103 19:11:39.258958   25507 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0103 19:11:39.259013   25507 main.go:141] libmachine: Launching plugin server for driver kvm2
I0103 19:11:39.273286   25507 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37555
I0103 19:11:39.273685   25507 main.go:141] libmachine: () Calling .GetVersion
I0103 19:11:39.274200   25507 main.go:141] libmachine: Using API Version  1
I0103 19:11:39.274235   25507 main.go:141] libmachine: () Calling .SetConfigRaw
I0103 19:11:39.274573   25507 main.go:141] libmachine: () Calling .GetMachineName
I0103 19:11:39.274773   25507 main.go:141] libmachine: (functional-166268) Calling .DriverName
I0103 19:11:39.274974   25507 ssh_runner.go:195] Run: systemctl --version
I0103 19:11:39.275001   25507 main.go:141] libmachine: (functional-166268) Calling .GetSSHHostname
I0103 19:11:39.278157   25507 main.go:141] libmachine: (functional-166268) DBG | domain functional-166268 has defined MAC address 52:54:00:8f:c2:90 in network mk-functional-166268
I0103 19:11:39.278532   25507 main.go:141] libmachine: (functional-166268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:c2:90", ip: ""} in network mk-functional-166268: {Iface:virbr1 ExpiryTime:2024-01-03 20:07:47 +0000 UTC Type:0 Mac:52:54:00:8f:c2:90 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:functional-166268 Clientid:01:52:54:00:8f:c2:90}
I0103 19:11:39.278573   25507 main.go:141] libmachine: (functional-166268) DBG | domain functional-166268 has defined IP address 192.168.50.47 and MAC address 52:54:00:8f:c2:90 in network mk-functional-166268
I0103 19:11:39.278648   25507 main.go:141] libmachine: (functional-166268) Calling .GetSSHPort
I0103 19:11:39.278807   25507 main.go:141] libmachine: (functional-166268) Calling .GetSSHKeyPath
I0103 19:11:39.278926   25507 main.go:141] libmachine: (functional-166268) Calling .GetSSHUsername
I0103 19:11:39.279046   25507 sshutil.go:53] new ssh client: &{IP:192.168.50.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/functional-166268/id_rsa Username:docker}
I0103 19:11:39.368814   25507 ssh_runner.go:195] Run: sudo crictl images --output json
I0103 19:11:39.422964   25507 main.go:141] libmachine: Making call to close driver server
I0103 19:11:39.422983   25507 main.go:141] libmachine: (functional-166268) Calling .Close
I0103 19:11:39.423267   25507 main.go:141] libmachine: Successfully made call to close driver server
I0103 19:11:39.423288   25507 main.go:141] libmachine: Making call to close connection to plugin binary
I0103 19:11:39.423298   25507 main.go:141] libmachine: (functional-166268) DBG | Closing plugin on server side
I0103 19:11:39.423303   25507 main.go:141] libmachine: Making call to close driver server
I0103 19:11:39.423312   25507 main.go:141] libmachine: (functional-166268) Calling .Close
I0103 19:11:39.423591   25507 main.go:141] libmachine: (functional-166268) DBG | Closing plugin on server side
I0103 19:11:39.423591   25507 main.go:141] libmachine: Successfully made call to close driver server
I0103 19:11:39.423632   25507 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)
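Note: the JSON output above is one array of image objects (id, repoDigests, repoTags, size). With jq on the host it can be flattened into roughly the same view as the table format; a sketch, assuming jq is available:

    out/minikube-linux-amd64 -p functional-166268 image ls --format json \
      | jq -r '.[] | [(.repoTags[0] // "<none>"), .id[0:13], .size] | @tsv'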

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-166268 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-166268 image ls --format yaml --alsologtostderr:
- id: 973ab58b9ac3f8c0260e94a3804c261917c3054523ba3c4b23b100f4ac8bc7f5
repoDigests:
- localhost/minikube-local-cache-test@sha256:498310b9b83b53282848773c50aac45c18de15948c9d25983f99f26b1a5266e1
repoTags:
- localhost/minikube-local-cache-test:functional-166268
size: "3345"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-166268
size: "34114467"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
- registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53621675"
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
- registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "123261750"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: d453dd892d9357f3559b967478ae9cbc417b52de66b53142f6c16c8a275486b9
repoDigests:
- docker.io/library/nginx@sha256:2bdc49f2f8ae8d8dc50ed00f2ee56d00385c6f8bc8a8b320d0a294d9e3b49026
- docker.io/library/nginx@sha256:9784f7985f6fba493ba30fb68419f50484fee8faaf677216cb95826f8491d2e9
repoTags:
- docker.io/library/nginx:latest
size: "190867606"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests:
- registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "74749335"
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
- registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "61551410"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "295456551"
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "127226832"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "65258016"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-166268 image ls --format yaml --alsologtostderr:
I0103 19:11:38.975228   25425 out.go:296] Setting OutFile to fd 1 ...
I0103 19:11:38.975371   25425 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0103 19:11:38.975382   25425 out.go:309] Setting ErrFile to fd 2...
I0103 19:11:38.975388   25425 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0103 19:11:38.975684   25425 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17885-9609/.minikube/bin
I0103 19:11:38.976445   25425 config.go:182] Loaded profile config "functional-166268": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0103 19:11:38.976554   25425 config.go:182] Loaded profile config "functional-166268": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0103 19:11:38.976909   25425 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0103 19:11:38.976963   25425 main.go:141] libmachine: Launching plugin server for driver kvm2
I0103 19:11:38.992798   25425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43855
I0103 19:11:38.993269   25425 main.go:141] libmachine: () Calling .GetVersion
I0103 19:11:38.993848   25425 main.go:141] libmachine: Using API Version  1
I0103 19:11:38.993870   25425 main.go:141] libmachine: () Calling .SetConfigRaw
I0103 19:11:38.994316   25425 main.go:141] libmachine: () Calling .GetMachineName
I0103 19:11:38.994542   25425 main.go:141] libmachine: (functional-166268) Calling .GetState
I0103 19:11:38.996463   25425 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0103 19:11:38.996501   25425 main.go:141] libmachine: Launching plugin server for driver kvm2
I0103 19:11:39.012467   25425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38231
I0103 19:11:39.012872   25425 main.go:141] libmachine: () Calling .GetVersion
I0103 19:11:39.013318   25425 main.go:141] libmachine: Using API Version  1
I0103 19:11:39.013342   25425 main.go:141] libmachine: () Calling .SetConfigRaw
I0103 19:11:39.013649   25425 main.go:141] libmachine: () Calling .GetMachineName
I0103 19:11:39.013802   25425 main.go:141] libmachine: (functional-166268) Calling .DriverName
I0103 19:11:39.013979   25425 ssh_runner.go:195] Run: systemctl --version
I0103 19:11:39.014005   25425 main.go:141] libmachine: (functional-166268) Calling .GetSSHHostname
I0103 19:11:39.017405   25425 main.go:141] libmachine: (functional-166268) DBG | domain functional-166268 has defined MAC address 52:54:00:8f:c2:90 in network mk-functional-166268
I0103 19:11:39.017811   25425 main.go:141] libmachine: (functional-166268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:c2:90", ip: ""} in network mk-functional-166268: {Iface:virbr1 ExpiryTime:2024-01-03 20:07:47 +0000 UTC Type:0 Mac:52:54:00:8f:c2:90 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:functional-166268 Clientid:01:52:54:00:8f:c2:90}
I0103 19:11:39.017848   25425 main.go:141] libmachine: (functional-166268) DBG | domain functional-166268 has defined IP address 192.168.50.47 and MAC address 52:54:00:8f:c2:90 in network mk-functional-166268
I0103 19:11:39.017978   25425 main.go:141] libmachine: (functional-166268) Calling .GetSSHPort
I0103 19:11:39.018260   25425 main.go:141] libmachine: (functional-166268) Calling .GetSSHKeyPath
I0103 19:11:39.018389   25425 main.go:141] libmachine: (functional-166268) Calling .GetSSHUsername
I0103 19:11:39.018600   25425 sshutil.go:53] new ssh client: &{IP:192.168.50.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/functional-166268/id_rsa Username:docker}
I0103 19:11:39.108913   25425 ssh_runner.go:195] Run: sudo crictl images --output json
I0103 19:11:39.168293   25425 main.go:141] libmachine: Making call to close driver server
I0103 19:11:39.168310   25425 main.go:141] libmachine: (functional-166268) Calling .Close
I0103 19:11:39.168587   25425 main.go:141] libmachine: Successfully made call to close driver server
I0103 19:11:39.168607   25425 main.go:141] libmachine: Making call to close connection to plugin binary
I0103 19:11:39.168623   25425 main.go:141] libmachine: (functional-166268) DBG | Closing plugin on server side
I0103 19:11:39.168624   25425 main.go:141] libmachine: Making call to close driver server
I0103 19:11:39.168658   25425 main.go:141] libmachine: (functional-166268) Calling .Close
I0103 19:11:39.168884   25425 main.go:141] libmachine: Successfully made call to close driver server
I0103 19:11:39.168907   25425 main.go:141] libmachine: Making call to close connection to plugin binary
I0103 19:11:39.168936   25425 main.go:141] libmachine: (functional-166268) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (4.1s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-166268 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-166268 ssh pgrep buildkitd: exit status 1 (246.060485ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-166268 image build -t localhost/my-image:functional-166268 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-166268 image build -t localhost/my-image:functional-166268 testdata/build --alsologtostderr: (3.619268367s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-166268 image build -t localhost/my-image:functional-166268 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 6456d22dd44
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-166268
--> ca80dc3e833
Successfully tagged localhost/my-image:functional-166268
ca80dc3e8336f929f6542c6066e596aa8cc027f3e2d48a7d504fbe1657fbad63
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-166268 image build -t localhost/my-image:functional-166268 testdata/build --alsologtostderr:
I0103 19:11:39.231313   25497 out.go:296] Setting OutFile to fd 1 ...
I0103 19:11:39.231586   25497 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0103 19:11:39.231602   25497 out.go:309] Setting ErrFile to fd 2...
I0103 19:11:39.231611   25497 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0103 19:11:39.231923   25497 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17885-9609/.minikube/bin
I0103 19:11:39.232904   25497 config.go:182] Loaded profile config "functional-166268": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0103 19:11:39.233653   25497 config.go:182] Loaded profile config "functional-166268": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0103 19:11:39.234222   25497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0103 19:11:39.234299   25497 main.go:141] libmachine: Launching plugin server for driver kvm2
I0103 19:11:39.250997   25497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42991
I0103 19:11:39.251523   25497 main.go:141] libmachine: () Calling .GetVersion
I0103 19:11:39.252071   25497 main.go:141] libmachine: Using API Version  1
I0103 19:11:39.252091   25497 main.go:141] libmachine: () Calling .SetConfigRaw
I0103 19:11:39.252476   25497 main.go:141] libmachine: () Calling .GetMachineName
I0103 19:11:39.252653   25497 main.go:141] libmachine: (functional-166268) Calling .GetState
I0103 19:11:39.254607   25497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0103 19:11:39.254650   25497 main.go:141] libmachine: Launching plugin server for driver kvm2
I0103 19:11:39.269870   25497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46003
I0103 19:11:39.270374   25497 main.go:141] libmachine: () Calling .GetVersion
I0103 19:11:39.270999   25497 main.go:141] libmachine: Using API Version  1
I0103 19:11:39.271039   25497 main.go:141] libmachine: () Calling .SetConfigRaw
I0103 19:11:39.271462   25497 main.go:141] libmachine: () Calling .GetMachineName
I0103 19:11:39.271661   25497 main.go:141] libmachine: (functional-166268) Calling .DriverName
I0103 19:11:39.271897   25497 ssh_runner.go:195] Run: systemctl --version
I0103 19:11:39.271935   25497 main.go:141] libmachine: (functional-166268) Calling .GetSSHHostname
I0103 19:11:39.275419   25497 main.go:141] libmachine: (functional-166268) DBG | domain functional-166268 has defined MAC address 52:54:00:8f:c2:90 in network mk-functional-166268
I0103 19:11:39.276010   25497 main.go:141] libmachine: (functional-166268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:c2:90", ip: ""} in network mk-functional-166268: {Iface:virbr1 ExpiryTime:2024-01-03 20:07:47 +0000 UTC Type:0 Mac:52:54:00:8f:c2:90 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:functional-166268 Clientid:01:52:54:00:8f:c2:90}
I0103 19:11:39.276045   25497 main.go:141] libmachine: (functional-166268) DBG | domain functional-166268 has defined IP address 192.168.50.47 and MAC address 52:54:00:8f:c2:90 in network mk-functional-166268
I0103 19:11:39.276301   25497 main.go:141] libmachine: (functional-166268) Calling .GetSSHPort
I0103 19:11:39.276470   25497 main.go:141] libmachine: (functional-166268) Calling .GetSSHKeyPath
I0103 19:11:39.276631   25497 main.go:141] libmachine: (functional-166268) Calling .GetSSHUsername
I0103 19:11:39.276798   25497 sshutil.go:53] new ssh client: &{IP:192.168.50.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/functional-166268/id_rsa Username:docker}
I0103 19:11:39.368934   25497 build_images.go:151] Building image from path: /tmp/build.465311754.tar
I0103 19:11:39.368992   25497 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0103 19:11:39.381721   25497 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.465311754.tar
I0103 19:11:39.388988   25497 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.465311754.tar: stat -c "%s %y" /var/lib/minikube/build/build.465311754.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.465311754.tar': No such file or directory
I0103 19:11:39.389020   25497 ssh_runner.go:362] scp /tmp/build.465311754.tar --> /var/lib/minikube/build/build.465311754.tar (3072 bytes)
I0103 19:11:39.456231   25497 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.465311754
I0103 19:11:39.476400   25497 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.465311754 -xf /var/lib/minikube/build/build.465311754.tar
I0103 19:11:39.492455   25497 crio.go:297] Building image: /var/lib/minikube/build/build.465311754
I0103 19:11:39.492523   25497 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-166268 /var/lib/minikube/build/build.465311754 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0103 19:11:42.750915   25497 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-166268 /var/lib/minikube/build/build.465311754 --cgroup-manager=cgroupfs: (3.258363344s)
I0103 19:11:42.750974   25497 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.465311754
I0103 19:11:42.760811   25497 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.465311754.tar
I0103 19:11:42.770225   25497 build_images.go:207] Built localhost/my-image:functional-166268 from /tmp/build.465311754.tar
I0103 19:11:42.770259   25497 build_images.go:123] succeeded building to: functional-166268
I0103 19:11:42.770265   25497 build_images.go:124] failed building to: 
I0103 19:11:42.770291   25497 main.go:141] libmachine: Making call to close driver server
I0103 19:11:42.770305   25497 main.go:141] libmachine: (functional-166268) Calling .Close
I0103 19:11:42.770583   25497 main.go:141] libmachine: (functional-166268) DBG | Closing plugin on server side
I0103 19:11:42.770674   25497 main.go:141] libmachine: Successfully made call to close driver server
I0103 19:11:42.770700   25497 main.go:141] libmachine: Making call to close connection to plugin binary
I0103 19:11:42.770718   25497 main.go:141] libmachine: Making call to close driver server
I0103 19:11:42.770731   25497 main.go:141] libmachine: (functional-166268) Calling .Close
I0103 19:11:42.770973   25497 main.go:141] libmachine: Successfully made call to close driver server
I0103 19:11:42.770989   25497 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-166268 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.10s)

TestFunctional/parallel/ImageCommands/Setup (2s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.98085495s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-166268
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.00s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.66s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-166268 image load --daemon gcr.io/google-containers/addon-resizer:functional-166268 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-166268 image load --daemon gcr.io/google-containers/addon-resizer:functional-166268 --alsologtostderr: (4.335955939s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-166268 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.66s)

TestFunctional/parallel/ServiceCmd/List (0.53s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-166268 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.53s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.49s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-166268 service list -o json
functional_test.go:1493: Took "488.862959ms" to run "out/minikube-linux-amd64 -p functional-166268 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.49s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.31s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.31s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-166268 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.50.47:31412
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.34s)

TestFunctional/parallel/ProfileCmd/profile_list (0.3s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "238.496098ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "66.241667ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.30s)

TestFunctional/parallel/ServiceCmd/Format (0.35s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-166268 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.35s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.34s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "258.379901ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "77.868001ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.34s)

TestFunctional/parallel/ServiceCmd/URL (0.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-166268 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.50.47:31412
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.34s)

TestFunctional/parallel/MountCmd/any-port (9.67s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-166268 /tmp/TestFunctionalparallelMountCmdany-port4246050221/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1704309077401431583" to /tmp/TestFunctionalparallelMountCmdany-port4246050221/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1704309077401431583" to /tmp/TestFunctionalparallelMountCmdany-port4246050221/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1704309077401431583" to /tmp/TestFunctionalparallelMountCmdany-port4246050221/001/test-1704309077401431583
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-166268 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-166268 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (232.479308ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-166268 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-166268 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan  3 19:11 created-by-test
-rw-r--r-- 1 docker docker 24 Jan  3 19:11 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan  3 19:11 test-1704309077401431583
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-166268 ssh cat /mount-9p/test-1704309077401431583
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-166268 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [ce8b985c-335b-4c96-96c0-5b50bde71bb3] Pending
helpers_test.go:344: "busybox-mount" [ce8b985c-335b-4c96-96c0-5b50bde71bb3] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [ce8b985c-335b-4c96-96c0-5b50bde71bb3] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [ce8b985c-335b-4c96-96c0-5b50bde71bb3] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 7.005176436s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-166268 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-166268 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-166268 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-166268 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-166268 /tmp/TestFunctionalparallelMountCmdany-port4246050221/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.67s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.64s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-166268 image load --daemon gcr.io/google-containers/addon-resizer:functional-166268 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-166268 image load --daemon gcr.io/google-containers/addon-resizer:functional-166268 --alsologtostderr: (2.405602364s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-166268 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.64s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (10.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.3478553s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-166268
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-166268 image load --daemon gcr.io/google-containers/addon-resizer:functional-166268 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-166268 image load --daemon gcr.io/google-containers/addon-resizer:functional-166268 --alsologtostderr: (7.563105248s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-166268 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (10.21s)

TestFunctional/parallel/MountCmd/specific-port (2.24s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-166268 /tmp/TestFunctionalparallelMountCmdspecific-port330541625/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-166268 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-166268 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (302.161503ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-166268 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-166268 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-166268 /tmp/TestFunctionalparallelMountCmdspecific-port330541625/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-166268 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-166268 ssh "sudo umount -f /mount-9p": exit status 1 (333.170003ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-166268 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-166268 /tmp/TestFunctionalparallelMountCmdspecific-port330541625/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.24s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.57s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-166268 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1626104370/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-166268 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1626104370/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-166268 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1626104370/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-166268 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-166268 ssh "findmnt -T" /mount1: exit status 1 (298.194254ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-166268 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-166268 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-166268 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-166268 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-166268 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1626104370/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-166268 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1626104370/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-166268 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1626104370/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.57s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-166268 image save gcr.io/google-containers/addon-resizer:functional-166268 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-166268 image save gcr.io/google-containers/addon-resizer:functional-166268 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.146404115s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.15s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-166268 image rm gcr.io/google-containers/addon-resizer:functional-166268 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-166268 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.54s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.76s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-166268 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
E0103 19:11:36.273043   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/addons-848866/client.crt: no such file or directory
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-166268 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (2.505679273s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-166268 image ls
2024/01/03 19:11:37 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.76s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-166268
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-166268 image save --daemon gcr.io/google-containers/addon-resizer:functional-166268 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-166268 image save --daemon gcr.io/google-containers/addon-resizer:functional-166268 --alsologtostderr: (1.271411925s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-166268
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.31s)

TestFunctional/delete_addon-resizer_images (0.07s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-166268
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-166268
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-166268
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestIngressAddonLegacy/StartLegacyK8sCluster (122.02s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-736101 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E0103 19:12:17.233414   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/addons-848866/client.crt: no such file or directory
E0103 19:13:39.153984   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/addons-848866/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-736101 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (2m2.01930733s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (122.02s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (17.48s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-736101 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-736101 addons enable ingress --alsologtostderr -v=5: (17.481796999s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (17.48s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.61s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-736101 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.61s)

TestJSONOutput/start/Command (61.42s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-522520 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0103 19:17:10.576786   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/functional-166268/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-522520 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m1.415572428s)
--- PASS: TestJSONOutput/start/Command (61.42s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.66s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-522520 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.66s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.65s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-522520 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.65s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.11s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-522520 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-522520 --output=json --user=testUser: (7.10905454s)
--- PASS: TestJSONOutput/stop/Command (7.11s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.22s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-737862 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-737862 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (81.330731ms)

-- stdout --
	{"specversion":"1.0","id":"d2d3e842-96a6-4a27-b692-4b0caa347135","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-737862] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d288ada6-e065-4778-8616-185d920b7a3b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17885"}}
	{"specversion":"1.0","id":"716d982a-083b-47c5-ab89-8484d679b2eb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b764b69a-3785-44e9-aeec-76fb55081e07","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17885-9609/kubeconfig"}}
	{"specversion":"1.0","id":"e7a937ba-296e-4aad-9080-65fb96112bb1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17885-9609/.minikube"}}
	{"specversion":"1.0","id":"0d95cdc1-1743-4ac0-a81b-ef78825b0231","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"6b1f16bc-29fc-43b7-956b-4eeb4b9eac0d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"fb120fe2-7676-44d3-ae92-9215fbca10fe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-737862" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-737862
--- PASS: TestErrorJSONOutput (0.22s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (91.86s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-147597 --driver=kvm2  --container-runtime=crio
E0103 19:18:32.499904   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/functional-166268/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-147597 --driver=kvm2  --container-runtime=crio: (42.386501165s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-150690 --driver=kvm2  --container-runtime=crio
E0103 19:19:07.102723   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/ingress-addon-legacy-736101/client.crt: no such file or directory
E0103 19:19:07.108099   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/ingress-addon-legacy-736101/client.crt: no such file or directory
E0103 19:19:07.118416   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/ingress-addon-legacy-736101/client.crt: no such file or directory
E0103 19:19:07.138714   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/ingress-addon-legacy-736101/client.crt: no such file or directory
E0103 19:19:07.179041   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/ingress-addon-legacy-736101/client.crt: no such file or directory
E0103 19:19:07.259391   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/ingress-addon-legacy-736101/client.crt: no such file or directory
E0103 19:19:07.419697   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/ingress-addon-legacy-736101/client.crt: no such file or directory
E0103 19:19:07.740314   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/ingress-addon-legacy-736101/client.crt: no such file or directory
E0103 19:19:08.381434   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/ingress-addon-legacy-736101/client.crt: no such file or directory
E0103 19:19:09.662179   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/ingress-addon-legacy-736101/client.crt: no such file or directory
E0103 19:19:12.223985   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/ingress-addon-legacy-736101/client.crt: no such file or directory
E0103 19:19:17.345073   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/ingress-addon-legacy-736101/client.crt: no such file or directory
E0103 19:19:27.586106   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/ingress-addon-legacy-736101/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-150690 --driver=kvm2  --container-runtime=crio: (46.79401441s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-147597
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-150690
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-150690" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-150690
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-150690: (1.004922529s)
helpers_test.go:175: Cleaning up "first-147597" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-147597
--- PASS: TestMinikubeProfile (91.86s)

TestMountStart/serial/StartWithMountFirst (25.77s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-932105 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0103 19:19:48.066266   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/ingress-addon-legacy-736101/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-932105 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (24.774046514s)
--- PASS: TestMountStart/serial/StartWithMountFirst (25.77s)

TestMountStart/serial/VerifyMountFirst (0.42s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-932105 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-932105 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.42s)

TestMountStart/serial/StartWithMountSecond (25.11s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-946151 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0103 19:20:29.027052   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/ingress-addon-legacy-736101/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-946151 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (24.110063768s)
--- PASS: TestMountStart/serial/StartWithMountSecond (25.11s)

TestMountStart/serial/VerifyMountSecond (0.39s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-946151 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-946151 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.39s)

TestMountStart/serial/DeleteFirst (0.89s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-932105 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.89s)

TestMountStart/serial/VerifyMountPostDelete (0.41s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-946151 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-946151 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.41s)

TestMountStart/serial/Stop (1.1s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-946151
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-946151: (1.095095231s)
--- PASS: TestMountStart/serial/Stop (1.10s)

TestMountStart/serial/RestartStopped (22.05s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-946151
E0103 19:20:48.654497   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/functional-166268/client.crt: no such file or directory
E0103 19:20:55.308576   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/addons-848866/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-946151: (21.053407272s)
--- PASS: TestMountStart/serial/RestartStopped (22.05s)

TestMountStart/serial/VerifyMountPostStop (0.41s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-946151 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-946151 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.41s)

TestMultiNode/serial/FreshStart2Nodes (106.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-484895 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0103 19:21:16.340907   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/functional-166268/client.crt: no such file or directory
E0103 19:21:50.947261   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/ingress-addon-legacy-736101/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-linux-amd64 start -p multinode-484895 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m45.78771549s)
multinode_test.go:92: (dbg) Run:  out/minikube-linux-amd64 -p multinode-484895 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (106.21s)
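
For reference, the two-node crio cluster exercised here is created and checked with the commands captured above (profile name, memory size, and verbosity are specific to this run):

    out/minikube-linux-amd64 start -p multinode-484895 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 -p multinode-484895 status --alsologtostderr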

TestMultiNode/serial/DeployApp2Nodes (5.49s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-484895 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-484895 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-484895 -- rollout status deployment/busybox: (3.740001165s)
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-484895 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-484895 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-484895 -- exec busybox-5bc68d56bd-lmcnh -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-484895 -- exec busybox-5bc68d56bd-xlczw -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-484895 -- exec busybox-5bc68d56bd-lmcnh -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-484895 -- exec busybox-5bc68d56bd-xlczw -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-484895 -- exec busybox-5bc68d56bd-lmcnh -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-484895 -- exec busybox-5bc68d56bd-xlczw -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.49s)
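
The DNS checks above deploy a two-replica busybox deployment and resolve Kubernetes names from each pod. A hand-run sketch of the same flow, taken from the commands in this log (pod names are generated at runtime, so the exec target below is a placeholder):

    out/minikube-linux-amd64 kubectl -p multinode-484895 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
    out/minikube-linux-amd64 kubectl -p multinode-484895 -- rollout status deployment/busybox
    out/minikube-linux-amd64 kubectl -p multinode-484895 -- get pods -o jsonpath='{.items[*].metadata.name}'
    out/minikube-linux-amd64 kubectl -p multinode-484895 -- exec <busybox-pod> -- nslookup kubernetes.default.svc.cluster.local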

TestMultiNode/serial/AddNode (43.4s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-484895 -v 3 --alsologtostderr
multinode_test.go:111: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-484895 -v 3 --alsologtostderr: (42.831892503s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-amd64 -p multinode-484895 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (43.40s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-484895 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.21s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.21s)

TestMultiNode/serial/CopyFile (7.59s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p multinode-484895 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-484895 cp testdata/cp-test.txt multinode-484895:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-484895 ssh -n multinode-484895 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-484895 cp multinode-484895:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1872919329/001/cp-test_multinode-484895.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-484895 ssh -n multinode-484895 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-484895 cp multinode-484895:/home/docker/cp-test.txt multinode-484895-m02:/home/docker/cp-test_multinode-484895_multinode-484895-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-484895 ssh -n multinode-484895 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-484895 ssh -n multinode-484895-m02 "sudo cat /home/docker/cp-test_multinode-484895_multinode-484895-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-484895 cp multinode-484895:/home/docker/cp-test.txt multinode-484895-m03:/home/docker/cp-test_multinode-484895_multinode-484895-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-484895 ssh -n multinode-484895 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-484895 ssh -n multinode-484895-m03 "sudo cat /home/docker/cp-test_multinode-484895_multinode-484895-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-484895 cp testdata/cp-test.txt multinode-484895-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-484895 ssh -n multinode-484895-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-484895 cp multinode-484895-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1872919329/001/cp-test_multinode-484895-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-484895 ssh -n multinode-484895-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-484895 cp multinode-484895-m02:/home/docker/cp-test.txt multinode-484895:/home/docker/cp-test_multinode-484895-m02_multinode-484895.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-484895 ssh -n multinode-484895-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-484895 ssh -n multinode-484895 "sudo cat /home/docker/cp-test_multinode-484895-m02_multinode-484895.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-484895 cp multinode-484895-m02:/home/docker/cp-test.txt multinode-484895-m03:/home/docker/cp-test_multinode-484895-m02_multinode-484895-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-484895 ssh -n multinode-484895-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-484895 ssh -n multinode-484895-m03 "sudo cat /home/docker/cp-test_multinode-484895-m02_multinode-484895-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-484895 cp testdata/cp-test.txt multinode-484895-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-484895 ssh -n multinode-484895-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-484895 cp multinode-484895-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1872919329/001/cp-test_multinode-484895-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-484895 ssh -n multinode-484895-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-484895 cp multinode-484895-m03:/home/docker/cp-test.txt multinode-484895:/home/docker/cp-test_multinode-484895-m03_multinode-484895.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-484895 ssh -n multinode-484895-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-484895 ssh -n multinode-484895 "sudo cat /home/docker/cp-test_multinode-484895-m03_multinode-484895.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-484895 cp multinode-484895-m03:/home/docker/cp-test.txt multinode-484895-m02:/home/docker/cp-test_multinode-484895-m03_multinode-484895-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-484895 ssh -n multinode-484895-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-484895 ssh -n multinode-484895-m02 "sudo cat /home/docker/cp-test_multinode-484895-m03_multinode-484895-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.59s)
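
CopyFile round-trips a test file through every node pair with minikube cp and verifies the contents over SSH. The core pattern, replayed from the commands above (one node pair shown; the test repeats it for all combinations):

    out/minikube-linux-amd64 -p multinode-484895 cp testdata/cp-test.txt multinode-484895:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p multinode-484895 ssh -n multinode-484895 "sudo cat /home/docker/cp-test.txt"
    out/minikube-linux-amd64 -p multinode-484895 cp multinode-484895:/home/docker/cp-test.txt multinode-484895-m02:/home/docker/cp-test_multinode-484895_multinode-484895-m02.txt
    out/minikube-linux-amd64 -p multinode-484895 ssh -n multinode-484895-m02 "sudo cat /home/docker/cp-test_multinode-484895_multinode-484895-m02.txt"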

TestMultiNode/serial/StopNode (2.23s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p multinode-484895 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-linux-amd64 -p multinode-484895 node stop m03: (1.358834163s)
multinode_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p multinode-484895 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-484895 status: exit status 7 (433.941026ms)

-- stdout --
	multinode-484895
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-484895-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-484895-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-linux-amd64 -p multinode-484895 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-484895 status --alsologtostderr: exit status 7 (432.250414ms)

-- stdout --
	multinode-484895
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-484895-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-484895-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0103 19:23:48.129660   32855 out.go:296] Setting OutFile to fd 1 ...
	I0103 19:23:48.129898   32855 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 19:23:48.129907   32855 out.go:309] Setting ErrFile to fd 2...
	I0103 19:23:48.129911   32855 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 19:23:48.130080   32855 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17885-9609/.minikube/bin
	I0103 19:23:48.130255   32855 out.go:303] Setting JSON to false
	I0103 19:23:48.130306   32855 mustload.go:65] Loading cluster: multinode-484895
	I0103 19:23:48.130403   32855 notify.go:220] Checking for updates...
	I0103 19:23:48.130751   32855 config.go:182] Loaded profile config "multinode-484895": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 19:23:48.130770   32855 status.go:255] checking status of multinode-484895 ...
	I0103 19:23:48.131354   32855 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 19:23:48.131435   32855 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 19:23:48.146439   32855 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35869
	I0103 19:23:48.146920   32855 main.go:141] libmachine: () Calling .GetVersion
	I0103 19:23:48.147481   32855 main.go:141] libmachine: Using API Version  1
	I0103 19:23:48.147509   32855 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 19:23:48.147804   32855 main.go:141] libmachine: () Calling .GetMachineName
	I0103 19:23:48.147980   32855 main.go:141] libmachine: (multinode-484895) Calling .GetState
	I0103 19:23:48.149745   32855 status.go:330] multinode-484895 host status = "Running" (err=<nil>)
	I0103 19:23:48.149761   32855 host.go:66] Checking if "multinode-484895" exists ...
	I0103 19:23:48.150057   32855 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 19:23:48.150099   32855 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 19:23:48.165433   32855 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44561
	I0103 19:23:48.165880   32855 main.go:141] libmachine: () Calling .GetVersion
	I0103 19:23:48.166310   32855 main.go:141] libmachine: Using API Version  1
	I0103 19:23:48.166337   32855 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 19:23:48.166659   32855 main.go:141] libmachine: () Calling .GetMachineName
	I0103 19:23:48.166842   32855 main.go:141] libmachine: (multinode-484895) Calling .GetIP
	I0103 19:23:48.170247   32855 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:23:48.170870   32855 main.go:141] libmachine: (multinode-484895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:f0:8c", ip: ""} in network mk-multinode-484895: {Iface:virbr1 ExpiryTime:2024-01-03 20:21:15 +0000 UTC Type:0 Mac:52:54:00:28:f0:8c Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:multinode-484895 Clientid:01:52:54:00:28:f0:8c}
	I0103 19:23:48.170908   32855 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined IP address 192.168.39.191 and MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:23:48.171043   32855 host.go:66] Checking if "multinode-484895" exists ...
	I0103 19:23:48.171338   32855 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 19:23:48.171377   32855 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 19:23:48.186515   32855 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36527
	I0103 19:23:48.186967   32855 main.go:141] libmachine: () Calling .GetVersion
	I0103 19:23:48.187397   32855 main.go:141] libmachine: Using API Version  1
	I0103 19:23:48.187419   32855 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 19:23:48.187717   32855 main.go:141] libmachine: () Calling .GetMachineName
	I0103 19:23:48.187879   32855 main.go:141] libmachine: (multinode-484895) Calling .DriverName
	I0103 19:23:48.188115   32855 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0103 19:23:48.188139   32855 main.go:141] libmachine: (multinode-484895) Calling .GetSSHHostname
	I0103 19:23:48.191255   32855 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:23:48.191848   32855 main.go:141] libmachine: (multinode-484895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:f0:8c", ip: ""} in network mk-multinode-484895: {Iface:virbr1 ExpiryTime:2024-01-03 20:21:15 +0000 UTC Type:0 Mac:52:54:00:28:f0:8c Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:multinode-484895 Clientid:01:52:54:00:28:f0:8c}
	I0103 19:23:48.191878   32855 main.go:141] libmachine: (multinode-484895) DBG | domain multinode-484895 has defined IP address 192.168.39.191 and MAC address 52:54:00:28:f0:8c in network mk-multinode-484895
	I0103 19:23:48.192000   32855 main.go:141] libmachine: (multinode-484895) Calling .GetSSHPort
	I0103 19:23:48.192188   32855 main.go:141] libmachine: (multinode-484895) Calling .GetSSHKeyPath
	I0103 19:23:48.192354   32855 main.go:141] libmachine: (multinode-484895) Calling .GetSSHUsername
	I0103 19:23:48.192517   32855 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/multinode-484895/id_rsa Username:docker}
	I0103 19:23:48.274126   32855 ssh_runner.go:195] Run: systemctl --version
	I0103 19:23:48.279505   32855 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 19:23:48.293241   32855 kubeconfig.go:92] found "multinode-484895" server: "https://192.168.39.191:8443"
	I0103 19:23:48.293267   32855 api_server.go:166] Checking apiserver status ...
	I0103 19:23:48.293296   32855 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 19:23:48.305502   32855 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1082/cgroup
	I0103 19:23:48.315263   32855 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/pod2adb5a2561f637a585e38e2b73f2b809/crio-b95bdf953a6043e0c3784d789f5fb39ee212a5c99f8dcef59ac3e65bb422e26f"
	I0103 19:23:48.315319   32855 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod2adb5a2561f637a585e38e2b73f2b809/crio-b95bdf953a6043e0c3784d789f5fb39ee212a5c99f8dcef59ac3e65bb422e26f/freezer.state
	I0103 19:23:48.326171   32855 api_server.go:204] freezer state: "THAWED"
	I0103 19:23:48.326202   32855 api_server.go:253] Checking apiserver healthz at https://192.168.39.191:8443/healthz ...
	I0103 19:23:48.331086   32855 api_server.go:279] https://192.168.39.191:8443/healthz returned 200:
	ok
	I0103 19:23:48.331109   32855 status.go:421] multinode-484895 apiserver status = Running (err=<nil>)
	I0103 19:23:48.331118   32855 status.go:257] multinode-484895 status: &{Name:multinode-484895 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0103 19:23:48.331136   32855 status.go:255] checking status of multinode-484895-m02 ...
	I0103 19:23:48.331450   32855 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 19:23:48.331489   32855 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 19:23:48.346062   32855 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41145
	I0103 19:23:48.346453   32855 main.go:141] libmachine: () Calling .GetVersion
	I0103 19:23:48.346918   32855 main.go:141] libmachine: Using API Version  1
	I0103 19:23:48.346940   32855 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 19:23:48.347237   32855 main.go:141] libmachine: () Calling .GetMachineName
	I0103 19:23:48.347447   32855 main.go:141] libmachine: (multinode-484895-m02) Calling .GetState
	I0103 19:23:48.348952   32855 status.go:330] multinode-484895-m02 host status = "Running" (err=<nil>)
	I0103 19:23:48.348970   32855 host.go:66] Checking if "multinode-484895-m02" exists ...
	I0103 19:23:48.349246   32855 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 19:23:48.349286   32855 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 19:23:48.363707   32855 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45721
	I0103 19:23:48.364114   32855 main.go:141] libmachine: () Calling .GetVersion
	I0103 19:23:48.364535   32855 main.go:141] libmachine: Using API Version  1
	I0103 19:23:48.364563   32855 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 19:23:48.364882   32855 main.go:141] libmachine: () Calling .GetMachineName
	I0103 19:23:48.365042   32855 main.go:141] libmachine: (multinode-484895-m02) Calling .GetIP
	I0103 19:23:48.367702   32855 main.go:141] libmachine: (multinode-484895-m02) DBG | domain multinode-484895-m02 has defined MAC address 52:54:00:b5:0c:0f in network mk-multinode-484895
	I0103 19:23:48.368062   32855 main.go:141] libmachine: (multinode-484895-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0c:0f", ip: ""} in network mk-multinode-484895: {Iface:virbr1 ExpiryTime:2024-01-03 20:22:20 +0000 UTC Type:0 Mac:52:54:00:b5:0c:0f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-484895-m02 Clientid:01:52:54:00:b5:0c:0f}
	I0103 19:23:48.368088   32855 main.go:141] libmachine: (multinode-484895-m02) DBG | domain multinode-484895-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:b5:0c:0f in network mk-multinode-484895
	I0103 19:23:48.368216   32855 host.go:66] Checking if "multinode-484895-m02" exists ...
	I0103 19:23:48.368620   32855 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 19:23:48.368663   32855 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 19:23:48.383263   32855 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39649
	I0103 19:23:48.383736   32855 main.go:141] libmachine: () Calling .GetVersion
	I0103 19:23:48.384258   32855 main.go:141] libmachine: Using API Version  1
	I0103 19:23:48.384283   32855 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 19:23:48.384631   32855 main.go:141] libmachine: () Calling .GetMachineName
	I0103 19:23:48.384804   32855 main.go:141] libmachine: (multinode-484895-m02) Calling .DriverName
	I0103 19:23:48.385003   32855 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0103 19:23:48.385023   32855 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHHostname
	I0103 19:23:48.388309   32855 main.go:141] libmachine: (multinode-484895-m02) DBG | domain multinode-484895-m02 has defined MAC address 52:54:00:b5:0c:0f in network mk-multinode-484895
	I0103 19:23:48.388984   32855 main.go:141] libmachine: (multinode-484895-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0c:0f", ip: ""} in network mk-multinode-484895: {Iface:virbr1 ExpiryTime:2024-01-03 20:22:20 +0000 UTC Type:0 Mac:52:54:00:b5:0c:0f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-484895-m02 Clientid:01:52:54:00:b5:0c:0f}
	I0103 19:23:48.389030   32855 main.go:141] libmachine: (multinode-484895-m02) DBG | domain multinode-484895-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:b5:0c:0f in network mk-multinode-484895
	I0103 19:23:48.389206   32855 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHPort
	I0103 19:23:48.389394   32855 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHKeyPath
	I0103 19:23:48.389555   32855 main.go:141] libmachine: (multinode-484895-m02) Calling .GetSSHUsername
	I0103 19:23:48.389777   32855 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9609/.minikube/machines/multinode-484895-m02/id_rsa Username:docker}
	I0103 19:23:48.473792   32855 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 19:23:48.486216   32855 status.go:257] multinode-484895-m02 status: &{Name:multinode-484895-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0103 19:23:48.486254   32855 status.go:255] checking status of multinode-484895-m03 ...
	I0103 19:23:48.486703   32855 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0103 19:23:48.486750   32855 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0103 19:23:48.501186   32855 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37791
	I0103 19:23:48.501654   32855 main.go:141] libmachine: () Calling .GetVersion
	I0103 19:23:48.502138   32855 main.go:141] libmachine: Using API Version  1
	I0103 19:23:48.502158   32855 main.go:141] libmachine: () Calling .SetConfigRaw
	I0103 19:23:48.502453   32855 main.go:141] libmachine: () Calling .GetMachineName
	I0103 19:23:48.502696   32855 main.go:141] libmachine: (multinode-484895-m03) Calling .GetState
	I0103 19:23:48.504317   32855 status.go:330] multinode-484895-m03 host status = "Stopped" (err=<nil>)
	I0103 19:23:48.504333   32855 status.go:343] host is not running, skipping remaining checks
	I0103 19:23:48.504341   32855 status.go:257] multinode-484895-m03 status: &{Name:multinode-484895-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.23s)
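
StopNode stops one worker and expects status to report it as Stopped while the command exits non-zero. Replayed from the commands above (exit status 7 is the expected outcome whenever any node is down, as the stdout above shows):

    out/minikube-linux-amd64 -p multinode-484895 node stop m03
    out/minikube-linux-amd64 -p multinode-484895 status    # exits 7; m03 reported as host/kubelet Stopped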

TestMultiNode/serial/StartAfterStop (29.89s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-484895 node start m03 --alsologtostderr
E0103 19:24:07.103467   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/ingress-addon-legacy-736101/client.crt: no such file or directory
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-484895 node start m03 --alsologtostderr: (29.23964979s)
multinode_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p multinode-484895 status
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (29.89s)

TestMultiNode/serial/DeleteNode (1.57s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-484895 node delete m03
multinode_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p multinode-484895 node delete m03: (1.028957892s)
multinode_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p multinode-484895 status --alsologtostderr
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.57s)
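
DeleteNode removes the third node and then confirms the remaining nodes report Ready. The same check by hand, using the go-template query captured above (quoting adjusted for a plain shell):

    out/minikube-linux-amd64 -p multinode-484895 node delete m03
    kubectl get nodes
    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'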

TestMultiNode/serial/RestartMultiNode (446.07s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-484895 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0103 19:39:07.103610   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/ingress-addon-legacy-736101/client.crt: no such file or directory
E0103 19:40:48.653710   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/functional-166268/client.crt: no such file or directory
E0103 19:40:55.307964   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/addons-848866/client.crt: no such file or directory
E0103 19:43:58.356488   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/addons-848866/client.crt: no such file or directory
E0103 19:44:07.103688   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/ingress-addon-legacy-736101/client.crt: no such file or directory
multinode_test.go:382: (dbg) Done: out/minikube-linux-amd64 start -p multinode-484895 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (7m25.505209455s)
multinode_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p multinode-484895 status --alsologtostderr
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (446.07s)

TestMultiNode/serial/ValidateNameConflict (47.88s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-484895
multinode_test.go:480: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-484895-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-484895-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (87.343566ms)

-- stdout --
	* [multinode-484895-m02] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17885
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17885-9609/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17885-9609/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-484895-m02' is duplicated with machine name 'multinode-484895-m02' in profile 'multinode-484895'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-484895-m03 --driver=kvm2  --container-runtime=crio
E0103 19:45:48.654087   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/functional-166268/client.crt: no such file or directory
E0103 19:45:55.307966   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/addons-848866/client.crt: no such file or directory
multinode_test.go:488: (dbg) Done: out/minikube-linux-amd64 start -p multinode-484895-m03 --driver=kvm2  --container-runtime=crio: (46.724670306s)
multinode_test.go:495: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-484895
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-484895: exit status 80 (235.116405ms)

-- stdout --
	* Adding node m03 to cluster multinode-484895
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-484895-m03 already exists in multinode-484895-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-484895-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (47.88s)
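
ValidateNameConflict confirms that profile names must be unique: starting a new profile that reuses an existing machine name fails, and node add refuses to recreate an existing node. The two failing invocations from this run, with the exit codes seen above:

    out/minikube-linux-amd64 start -p multinode-484895-m02 --driver=kvm2 --container-runtime=crio   # exit 14: profile name duplicated
    out/minikube-linux-amd64 node add -p multinode-484895                                           # exit 80: node already exists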

TestScheduledStopUnix (114.8s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-683647 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-683647 --memory=2048 --driver=kvm2  --container-runtime=crio: (43.079156401s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-683647 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-683647 -n scheduled-stop-683647
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-683647 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-683647 --cancel-scheduled
E0103 19:52:10.150640   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/ingress-addon-legacy-736101/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-683647 -n scheduled-stop-683647
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-683647
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-683647 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-683647
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-683647: exit status 7 (73.34927ms)

-- stdout --
	scheduled-stop-683647
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-683647 -n scheduled-stop-683647
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-683647 -n scheduled-stop-683647: exit status 7 (77.228553ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-683647" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-683647
--- PASS: TestScheduledStopUnix (114.80s)
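
The scheduled-stop flow above schedules a stop, cancels it, then schedules a short one and waits for the host to reach Stopped. A condensed replay of the commands from this run:

    out/minikube-linux-amd64 stop -p scheduled-stop-683647 --schedule 5m
    out/minikube-linux-amd64 stop -p scheduled-stop-683647 --cancel-scheduled
    out/minikube-linux-amd64 stop -p scheduled-stop-683647 --schedule 15s
    out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-683647 -n scheduled-stop-683647   # eventually prints "Stopped" and exits 7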

TestKubernetesUpgrade (159.41s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-952735 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-952735 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m5.542182912s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-952735
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-952735: (2.111601973s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-952735 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-952735 status --format={{.Host}}: exit status 7 (93.409968ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-952735 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-952735 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (50.09962901s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-952735 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-952735 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-952735 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=crio: exit status 106 (110.182531ms)

-- stdout --
	* [kubernetes-upgrade-952735] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17885
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17885-9609/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17885-9609/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-952735
	    minikube start -p kubernetes-upgrade-952735 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9527352 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-952735 --kubernetes-version=v1.29.0-rc.2
	    

** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-952735 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-952735 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (40.06180493s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-952735" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-952735
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-952735: (1.337645397s)
--- PASS: TestKubernetesUpgrade (159.41s)
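
The upgrade path exercised here is: start on v1.16.0, stop, restart on v1.29.0-rc.2, and verify that an in-place downgrade is rejected. The sequence, taken from the commands above with the exit code seen in the log:

    out/minikube-linux-amd64 start -p kubernetes-upgrade-952735 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 stop -p kubernetes-upgrade-952735
    out/minikube-linux-amd64 start -p kubernetes-upgrade-952735 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 start -p kubernetes-upgrade-952735 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 --container-runtime=crio   # exit 106: downgrade unsupported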

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-862548 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-862548 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (104.610779ms)

-- stdout --
	* [NoKubernetes-862548] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17885
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17885-9609/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17885-9609/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
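
As the stderr above states, --no-kubernetes and --kubernetes-version are mutually exclusive. A sketch of the failing invocation from this run plus the remediation the error message suggests (the final start line is a plausible corrected invocation, not part of this sub-test):

    out/minikube-linux-amd64 start -p NoKubernetes-862548 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 --container-runtime=crio   # exit 14
    minikube config unset kubernetes-version    # clears a globally configured version, per the error message
    out/minikube-linux-amd64 start -p NoKubernetes-862548 --no-kubernetes --driver=kvm2 --container-runtime=crio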

TestNoKubernetes/serial/StartWithK8s (96.71s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-862548 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-862548 --driver=kvm2  --container-runtime=crio: (1m36.403585298s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-862548 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (96.71s)

TestNetworkPlugins/group/false (3.43s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-719541 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-719541 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (114.21684ms)

-- stdout --
	* [false-719541] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17885
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17885-9609/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17885-9609/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0103 19:53:03.914685   40668 out.go:296] Setting OutFile to fd 1 ...
	I0103 19:53:03.914846   40668 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 19:53:03.914857   40668 out.go:309] Setting ErrFile to fd 2...
	I0103 19:53:03.914864   40668 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 19:53:03.915114   40668 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17885-9609/.minikube/bin
	I0103 19:53:03.915766   40668 out.go:303] Setting JSON to false
	I0103 19:53:03.916767   40668 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5731,"bootTime":1704305853,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0103 19:53:03.916834   40668 start.go:138] virtualization: kvm guest
	I0103 19:53:03.919238   40668 out.go:177] * [false-719541] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0103 19:53:03.920889   40668 out.go:177]   - MINIKUBE_LOCATION=17885
	I0103 19:53:03.920909   40668 notify.go:220] Checking for updates...
	I0103 19:53:03.923499   40668 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0103 19:53:03.925190   40668 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17885-9609/kubeconfig
	I0103 19:53:03.926680   40668 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17885-9609/.minikube
	I0103 19:53:03.928102   40668 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0103 19:53:03.929377   40668 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0103 19:53:03.930992   40668 config.go:182] Loaded profile config "NoKubernetes-862548": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 19:53:03.931113   40668 config.go:182] Loaded profile config "force-systemd-env-892756": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 19:53:03.931209   40668 config.go:182] Loaded profile config "offline-crio-878561": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 19:53:03.931305   40668 driver.go:392] Setting default libvirt URI to qemu:///system
	I0103 19:53:03.967235   40668 out.go:177] * Using the kvm2 driver based on user configuration
	I0103 19:53:03.968676   40668 start.go:298] selected driver: kvm2
	I0103 19:53:03.968689   40668 start.go:902] validating driver "kvm2" against <nil>
	I0103 19:53:03.968699   40668 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0103 19:53:03.970693   40668 out.go:177] 
	W0103 19:53:03.971904   40668 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0103 19:53:03.973108   40668 out.go:177] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-719541 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-719541

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-719541

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-719541

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-719541

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-719541

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-719541

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-719541

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-719541

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-719541

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-719541

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-719541"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-719541"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-719541"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-719541

>>> host: crictl pods:
* Profile "false-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-719541"

>>> host: crictl containers:
* Profile "false-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-719541"

>>> k8s: describe netcat deployment:
error: context "false-719541" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-719541" does not exist

>>> k8s: netcat logs:
error: context "false-719541" does not exist

>>> k8s: describe coredns deployment:
error: context "false-719541" does not exist

>>> k8s: describe coredns pods:
error: context "false-719541" does not exist

>>> k8s: coredns logs:
error: context "false-719541" does not exist

>>> k8s: describe api server pod(s):
error: context "false-719541" does not exist

>>> k8s: api server logs:
error: context "false-719541" does not exist

>>> host: /etc/cni:
* Profile "false-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-719541"

>>> host: ip a s:
* Profile "false-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-719541"

>>> host: ip r s:
* Profile "false-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-719541"

>>> host: iptables-save:
* Profile "false-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-719541"

>>> host: iptables table nat:
* Profile "false-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-719541"

>>> k8s: describe kube-proxy daemon set:
error: context "false-719541" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-719541" does not exist

>>> k8s: kube-proxy logs:
error: context "false-719541" does not exist

>>> host: kubelet daemon status:
* Profile "false-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-719541"

>>> host: kubelet daemon config:
* Profile "false-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-719541"

>>> k8s: kubelet logs:
* Profile "false-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-719541"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-719541"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-719541"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-719541

>>> host: docker daemon status:
* Profile "false-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-719541"

>>> host: docker daemon config:
* Profile "false-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-719541"

>>> host: /etc/docker/daemon.json:
* Profile "false-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-719541"

>>> host: docker system info:
* Profile "false-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-719541"

>>> host: cri-docker daemon status:
* Profile "false-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-719541"

>>> host: cri-docker daemon config:
* Profile "false-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-719541"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-719541"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-719541"

>>> host: cri-dockerd version:
* Profile "false-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-719541"

>>> host: containerd daemon status:
* Profile "false-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-719541"

>>> host: containerd daemon config:
* Profile "false-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-719541"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-719541"

>>> host: /etc/containerd/config.toml:
* Profile "false-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-719541"

>>> host: containerd config dump:
* Profile "false-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-719541"

>>> host: crio daemon status:
* Profile "false-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-719541"

>>> host: crio daemon config:
* Profile "false-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-719541"

>>> host: /etc/crio:
* Profile "false-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-719541"

>>> host: crio config:
* Profile "false-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-719541"

----------------------- debugLogs end: false-719541 [took: 3.164006605s] --------------------------------
helpers_test.go:175: Cleaning up "false-719541" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-719541
--- PASS: TestNetworkPlugins/group/false (3.43s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (7.83s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-862548 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-862548 --no-kubernetes --driver=kvm2  --container-runtime=crio: (6.307870521s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-862548 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-862548 status -o json: exit status 2 (329.907076ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-862548","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-862548
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-862548: (1.18763209s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (7.83s)
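The status JSON above shows the profile's host Running while the kubelet and API server are Stopped, which is what a --no-kubernetes profile should report. As a minimal sketch (not part of the test suite), the same fields could be checked from a shell with jq, assuming jq is installed:
	# Hypothetical post-check for a --no-kubernetes profile; field names taken from the JSON above.
	STATUS="$(out/minikube-linux-amd64 -p NoKubernetes-862548 status -o json || true)"   # status exits non-zero while the kubelet is stopped
	HOST="$(echo "$STATUS" | jq -r '.Host')"        # expected: Running
	KUBELET="$(echo "$STATUS" | jq -r '.Kubelet')"  # expected: Stopped
	[ "$HOST" = "Running" ] && [ "$KUBELET" = "Stopped" ] && echo "profile is up without Kubernetes"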

                                                
                                    
x
+
TestNoKubernetes/serial/Start (55.96s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-862548 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-862548 --no-kubernetes --driver=kvm2  --container-runtime=crio: (55.961249616s)
--- PASS: TestNoKubernetes/serial/Start (55.96s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-862548 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-862548 "sudo systemctl is-active --quiet service kubelet": exit status 1 (239.622148ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)
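For reference, systemctl is-active exits 0 only when the unit is active (an inactive unit typically returns 3, matching the "Process exited with status 3" above), so the non-zero exit is the expected outcome when no kubelet is running. A minimal standalone sketch of the same check, hypothetical and run directly on the guest rather than through the test's ssh wrapper:
	# 0 = active, non-zero (usually 3) = not running, per systemd conventions.
	sudo systemctl is-active --quiet kubelet
	rc=$?
	[ "$rc" -ne 0 ] && echo "kubelet is not active (is-active exit code $rc)"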

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (0.79s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.79s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-862548
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-862548: (1.263814966s)
--- PASS: TestNoKubernetes/serial/Stop (1.26s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (41.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-862548 --driver=kvm2  --container-runtime=crio
E0103 19:55:48.654082   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/functional-166268/client.crt: no such file or directory
E0103 19:55:55.307740   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/addons-848866/client.crt: no such file or directory
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-862548 --driver=kvm2  --container-runtime=crio: (41.339943313s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (41.34s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-862548 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-862548 "sudo systemctl is-active --quiet service kubelet": exit status 1 (217.017733ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (1.68s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.68s)

                                                
                                    
x
+
TestPause/serial/Start (69.02s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-705639 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-705639 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m9.022089989s)
--- PASS: TestPause/serial/Start (69.02s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (65.57s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-719541 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-719541 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m5.570699701s)
--- PASS: TestNetworkPlugins/group/auto/Start (65.57s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (84.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-719541 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-719541 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m24.136911546s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (84.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-719541 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (12.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-719541 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-4dbsh" [b21ac9fe-69b1-4930-826f-5681762400ce] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-4dbsh" [b21ac9fe-69b1-4930-826f-5681762400ce] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.005187013s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-719541 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-719541 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-719541 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-2s2vq" [ee55c27b-fec1-424a-9777-6b5d29ccd2f7] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004548126s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (99.99s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-719541 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-719541 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m39.992133985s)
--- PASS: TestNetworkPlugins/group/calico/Start (99.99s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-719541 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (13.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-719541 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-cdggw" [74a824d5-d2db-4aba-91c0-2937e14e57d6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-cdggw" [74a824d5-d2db-4aba-91c0-2937e14e57d6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 13.005892798s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (13.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (94.68s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-719541 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-719541 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m34.679327287s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (94.68s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-719541 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-719541 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-719541 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (120.67s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-719541 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
E0103 20:00:38.357140   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/addons-848866/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-719541 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (2m0.673817796s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (120.67s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-g7kfk" [b20ecaed-f7c2-4a0f-8e63-67a0a1ba68bc] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006600006s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-719541 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (12.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-719541 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-wfdfl" [32bceb50-5b4e-4cf6-aed9-2f382b071230] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-wfdfl" [32bceb50-5b4e-4cf6-aed9-2f382b071230] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.005569798s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-719541 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (14.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-719541 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-z2rsx" [eaf966f4-52ea-45b3-a07b-335cc0479e0b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-z2rsx" [eaf966f4-52ea-45b3-a07b-335cc0479e0b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 14.004676046s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (14.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.20s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-719541 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-719541 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-719541 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.20s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-719541 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-719541 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.20s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-719541 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.20s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.49s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-857735
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.49s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (91.72s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-719541 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-719541 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m31.718018383s)
--- PASS: TestNetworkPlugins/group/flannel/Start (91.72s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (122.05s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-719541 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-719541 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (2m2.050783515s)
--- PASS: TestNetworkPlugins/group/bridge/Start (122.05s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (171.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-927922 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-927922 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0: (2m51.071905953s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (171.07s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-719541 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (15.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-719541 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-b56vd" [74b0b252-1c89-495f-9e40-687a4dcf6811] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-b56vd" [74b0b252-1c89-495f-9e40-687a4dcf6811] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 15.00551293s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (15.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-719541 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-719541 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-719541 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (200.32s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-749210 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-749210 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (3m20.324243823s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (200.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-f5mmd" [aac4bc8e-997c-45c4-8b34-4be6f4381b2c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005009045s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-719541 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-719541 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-qnr84" [e2e29aff-295c-4b28-9593-5220d608505a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-qnr84" [e2e29aff-295c-4b28-9593-5220d608505a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.005329352s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-719541 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-719541 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-719541 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-719541 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.24s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (101.53s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-451331 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-451331 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (1m41.534643188s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (101.53s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (11.30s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-719541 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-r9j9t" [3d28ba78-5741-460e-9988-72d0e3689448] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-r9j9t" [3d28ba78-5741-460e-9988-72d0e3689448] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.006455324s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-719541 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-719541 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-719541 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.17s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (105.23s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-018788 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
E0103 20:04:41.492451   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/auto-719541/client.crt: no such file or directory
E0103 20:04:48.942556   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/kindnet-719541/client.crt: no such file or directory
E0103 20:04:48.947864   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/kindnet-719541/client.crt: no such file or directory
E0103 20:04:48.958179   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/kindnet-719541/client.crt: no such file or directory
E0103 20:04:48.978508   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/kindnet-719541/client.crt: no such file or directory
E0103 20:04:49.018852   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/kindnet-719541/client.crt: no such file or directory
E0103 20:04:49.099249   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/kindnet-719541/client.crt: no such file or directory
E0103 20:04:49.259721   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/kindnet-719541/client.crt: no such file or directory
E0103 20:04:49.580421   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/kindnet-719541/client.crt: no such file or directory
E0103 20:04:50.220859   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/kindnet-719541/client.crt: no such file or directory
E0103 20:04:51.502045   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/kindnet-719541/client.crt: no such file or directory
E0103 20:04:54.062649   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/kindnet-719541/client.crt: no such file or directory
E0103 20:04:59.183267   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/kindnet-719541/client.crt: no such file or directory
E0103 20:05:01.973421   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/auto-719541/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-018788 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (1m45.232177058s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (105.23s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (13.52s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-927922 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ec52ba5e-d926-4b8f-abb8-0381cf3f985a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0103 20:05:09.423820   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/kindnet-719541/client.crt: no such file or directory
helpers_test.go:344: "busybox" [ec52ba5e-d926-4b8f-abb8-0381cf3f985a] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 13.004457283s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-927922 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (13.52s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.26s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-927922 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-927922 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.167384923s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-927922 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.26s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (9.31s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-451331 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [429c2056-bdb7-4ef4-9e0a-1689542c977e] Pending
helpers_test.go:344: "busybox" [429c2056-bdb7-4ef4-9e0a-1689542c977e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [429c2056-bdb7-4ef4-9e0a-1689542c977e] Running
E0103 20:05:55.308646   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/addons-848866/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.00497421s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-451331 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.31s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.22s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-451331 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-451331 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.12398187s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-451331 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.22s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (9.3s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-749210 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9a560811-1b16-4bbb-98e8-ceb54e9f8bc8] Pending
helpers_test.go:344: "busybox" [9a560811-1b16-4bbb-98e8-ceb54e9f8bc8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [9a560811-1b16-4bbb-98e8-ceb54e9f8bc8] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003737836s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-749210 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.30s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.32s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-018788 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [cfdaacfb-b339-488d-968b-537870733563] Pending
helpers_test.go:344: "busybox" [cfdaacfb-b339-488d-968b-537870733563] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [cfdaacfb-b339-488d-968b-537870733563] Running
E0103 20:06:30.038637   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/calico-719541/client.crt: no such file or directory
E0103 20:06:30.043916   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/calico-719541/client.crt: no such file or directory
E0103 20:06:30.054192   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/calico-719541/client.crt: no such file or directory
E0103 20:06:30.075138   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/calico-719541/client.crt: no such file or directory
E0103 20:06:30.115494   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/calico-719541/client.crt: no such file or directory
E0103 20:06:30.195842   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/calico-719541/client.crt: no such file or directory
E0103 20:06:30.356488   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/calico-719541/client.crt: no such file or directory
E0103 20:06:30.677056   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/calico-719541/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.004796898s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-018788 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.32s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-749210 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0103 20:06:31.318233   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/calico-719541/client.crt: no such file or directory
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-749210 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.01s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-018788 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0103 20:06:32.599330   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/calico-719541/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-018788 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.018170576s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-018788 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.10s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (399.05s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-927922 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-927922 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0: (6m38.77356542s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-927922 -n old-k8s-version-927922
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (399.05s)
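To reproduce the restart-and-verify sequence outside the harness, a minimal sketch using the exact flags recorded above for this profile:

	out/minikube-linux-amd64 start -p old-k8s-version-927922 --memory=2200 --alsologtostderr --wait=true \
	  --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false \
	  --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.16.0
	out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-927922 -n old-k8s-version-927922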

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (545.04s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-451331 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
E0103 20:08:33.811091   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/flannel-719541/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-451331 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (9m4.757197127s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-451331 -n embed-certs-451331
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (545.04s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (549.79s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-749210 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-749210 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (9m9.499556737s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-749210 -n no-preload-749210
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (549.79s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (557.7s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-018788 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
E0103 20:09:07.103341   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/ingress-addon-legacy-736101/client.crt: no such file or directory
E0103 20:09:09.451750   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/bridge-719541/client.crt: no such file or directory
E0103 20:09:09.457009   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/bridge-719541/client.crt: no such file or directory
E0103 20:09:09.467303   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/bridge-719541/client.crt: no such file or directory
E0103 20:09:09.487626   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/bridge-719541/client.crt: no such file or directory
E0103 20:09:09.528054   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/bridge-719541/client.crt: no such file or directory
E0103 20:09:09.608480   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/bridge-719541/client.crt: no such file or directory
E0103 20:09:09.768967   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/bridge-719541/client.crt: no such file or directory
E0103 20:09:10.089613   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/bridge-719541/client.crt: no such file or directory
E0103 20:09:10.730615   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/bridge-719541/client.crt: no such file or directory
E0103 20:09:12.011589   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/bridge-719541/client.crt: no such file or directory
E0103 20:09:13.494846   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/flannel-719541/client.crt: no such file or directory
E0103 20:09:13.885461   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/calico-719541/client.crt: no such file or directory
E0103 20:09:14.572575   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/bridge-719541/client.crt: no such file or directory
E0103 20:09:19.692745   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/bridge-719541/client.crt: no such file or directory
E0103 20:09:21.012467   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/auto-719541/client.crt: no such file or directory
E0103 20:09:26.400027   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/custom-flannel-719541/client.crt: no such file or directory
E0103 20:09:29.933290   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/bridge-719541/client.crt: no such file or directory
E0103 20:09:48.695450   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/auto-719541/client.crt: no such file or directory
E0103 20:09:48.942217   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/kindnet-719541/client.crt: no such file or directory
E0103 20:09:50.414313   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/bridge-719541/client.crt: no such file or directory
E0103 20:09:54.456452   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/flannel-719541/client.crt: no such file or directory
E0103 20:10:11.589581   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/enable-default-cni-719541/client.crt: no such file or directory
E0103 20:10:16.626782   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/kindnet-719541/client.crt: no such file or directory
E0103 20:10:31.374843   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/bridge-719541/client.crt: no such file or directory
E0103 20:10:48.654427   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/functional-166268/client.crt: no such file or directory
E0103 20:10:55.307728   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/addons-848866/client.crt: no such file or directory
E0103 20:11:16.377408   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/flannel-719541/client.crt: no such file or directory
E0103 20:11:30.038571   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/calico-719541/client.crt: no such file or directory
E0103 20:11:42.554722   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/custom-flannel-719541/client.crt: no such file or directory
E0103 20:11:53.295801   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/bridge-719541/client.crt: no such file or directory
E0103 20:11:57.726020   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/calico-719541/client.crt: no such file or directory
E0103 20:12:10.240762   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/custom-flannel-719541/client.crt: no such file or directory
E0103 20:12:27.748454   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/enable-default-cni-719541/client.crt: no such file or directory
E0103 20:12:55.429762   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/enable-default-cni-719541/client.crt: no such file or directory
E0103 20:13:32.532900   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/flannel-719541/client.crt: no such file or directory
E0103 20:14:00.218280   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/flannel-719541/client.crt: no such file or directory
E0103 20:14:07.102813   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/ingress-addon-legacy-736101/client.crt: no such file or directory
E0103 20:14:09.452720   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/bridge-719541/client.crt: no such file or directory
E0103 20:14:21.013014   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/auto-719541/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-018788 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (9m17.434603424s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-018788 -n default-k8s-diff-port-018788
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (557.70s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (59.68s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-195281 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
E0103 20:32:27.748264   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/enable-default-cni-719541/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-195281 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (59.677252075s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (59.68s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.49s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-195281 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-195281 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.493560524s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.49s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (328.88s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-195281 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
E0103 20:35:55.308073   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/addons-848866/client.crt: no such file or directory
E0103 20:36:21.877915   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/no-preload-749210/client.crt: no such file or directory
E0103 20:36:21.883225   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/no-preload-749210/client.crt: no such file or directory
E0103 20:36:21.893588   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/no-preload-749210/client.crt: no such file or directory
E0103 20:36:21.913931   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/no-preload-749210/client.crt: no such file or directory
E0103 20:36:21.954295   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/no-preload-749210/client.crt: no such file or directory
E0103 20:36:22.034698   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/no-preload-749210/client.crt: no such file or directory
E0103 20:36:22.059058   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/default-k8s-diff-port-018788/client.crt: no such file or directory
E0103 20:36:22.064318   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/default-k8s-diff-port-018788/client.crt: no such file or directory
E0103 20:36:22.074593   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/default-k8s-diff-port-018788/client.crt: no such file or directory
E0103 20:36:22.094980   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/default-k8s-diff-port-018788/client.crt: no such file or directory
E0103 20:36:22.135336   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/default-k8s-diff-port-018788/client.crt: no such file or directory
E0103 20:36:22.195613   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/no-preload-749210/client.crt: no such file or directory
E0103 20:36:22.215926   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/default-k8s-diff-port-018788/client.crt: no such file or directory
E0103 20:36:22.376370   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/default-k8s-diff-port-018788/client.crt: no such file or directory
E0103 20:36:22.516754   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/no-preload-749210/client.crt: no such file or directory
E0103 20:36:22.696584   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/default-k8s-diff-port-018788/client.crt: no such file or directory
E0103 20:36:23.157441   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/no-preload-749210/client.crt: no such file or directory
E0103 20:36:23.336777   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/default-k8s-diff-port-018788/client.crt: no such file or directory
E0103 20:36:24.438019   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/no-preload-749210/client.crt: no such file or directory
E0103 20:36:24.617580   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/default-k8s-diff-port-018788/client.crt: no such file or directory
E0103 20:36:26.998407   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/no-preload-749210/client.crt: no such file or directory
E0103 20:36:27.178069   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/default-k8s-diff-port-018788/client.crt: no such file or directory
E0103 20:36:28.409860   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/old-k8s-version-927922/client.crt: no such file or directory
E0103 20:36:30.038381   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/calico-719541/client.crt: no such file or directory
E0103 20:36:32.119125   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/no-preload-749210/client.crt: no such file or directory
E0103 20:36:32.298607   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/default-k8s-diff-port-018788/client.crt: no such file or directory
E0103 20:36:42.359356   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/no-preload-749210/client.crt: no such file or directory
E0103 20:36:42.539119   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/default-k8s-diff-port-018788/client.crt: no such file or directory
E0103 20:36:42.554461   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/custom-flannel-719541/client.crt: no such file or directory
E0103 20:37:02.840267   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/no-preload-749210/client.crt: no such file or directory
E0103 20:37:03.019956   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/default-k8s-diff-port-018788/client.crt: no such file or directory
E0103 20:37:24.057724   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/auto-719541/client.crt: no such file or directory
E0103 20:37:27.747870   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/enable-default-cni-719541/client.crt: no such file or directory
E0103 20:37:43.800784   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/no-preload-749210/client.crt: no such file or directory
E0103 20:37:43.981157   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/default-k8s-diff-port-018788/client.crt: no such file or directory
E0103 20:37:50.330882   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/old-k8s-version-927922/client.crt: no such file or directory
E0103 20:37:51.988159   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/kindnet-719541/client.crt: no such file or directory
E0103 20:38:32.532085   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/flannel-719541/client.crt: no such file or directory
E0103 20:38:51.707286   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/functional-166268/client.crt: no such file or directory
E0103 20:39:05.721603   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/no-preload-749210/client.crt: no such file or directory
E0103 20:39:05.901996   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/default-k8s-diff-port-018788/client.crt: no such file or directory
E0103 20:39:07.102555   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/ingress-addon-legacy-736101/client.crt: no such file or directory
E0103 20:39:09.451962   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/bridge-719541/client.crt: no such file or directory
E0103 20:39:21.012589   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/auto-719541/client.crt: no such file or directory
E0103 20:39:33.087607   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/calico-719541/client.crt: no such file or directory
E0103 20:39:45.602048   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/custom-flannel-719541/client.crt: no such file or directory
E0103 20:39:48.942291   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/kindnet-719541/client.crt: no such file or directory
E0103 20:40:06.487312   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/old-k8s-version-927922/client.crt: no such file or directory
E0103 20:40:30.791994   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/enable-default-cni-719541/client.crt: no such file or directory
E0103 20:40:34.171684   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/old-k8s-version-927922/client.crt: no such file or directory
E0103 20:40:48.654195   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/functional-166268/client.crt: no such file or directory
E0103 20:40:55.307990   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/addons-848866/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-195281 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (5m28.584790718s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-195281 -n newest-cni-195281
E0103 20:41:21.877000   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/no-preload-749210/client.crt: no such file or directory
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (328.88s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-195281 image list --format=json
E0103 20:41:22.059917   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-9609/.minikube/profiles/default-k8s-diff-port-018788/client.crt: no such file or directory
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)
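The image inventory checked above comes from a single command; any image outside minikube's expected set is reported (here kindest/kindnetd). Listing the images manually for the same profile:

	out/minikube-linux-amd64 -p newest-cni-195281 image list --format=json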

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.71s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-195281 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-195281 -n newest-cni-195281
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-195281 -n newest-cni-195281: exit status 2 (262.62193ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-195281 -n newest-cni-195281
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-195281 -n newest-cni-195281: exit status 2 (261.32883ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-195281 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-195281 -n newest-cni-195281
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-195281 -n newest-cni-195281
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.71s)
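The pause/unpause round trip above can be checked by hand with the same commands; while the profile is paused, status exits with code 2 (noted above as "may be ok"):

	out/minikube-linux-amd64 pause -p newest-cni-195281 --alsologtostderr -v=1
	out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-195281 -n newest-cni-195281   # expected: Paused, exit status 2
	out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-195281 -n newest-cni-195281     # expected: Stopped, exit status 2
	out/minikube-linux-amd64 unpause -p newest-cni-195281 --alsologtostderr -v=1
	out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-195281 -n newest-cni-195281
	out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-195281 -n newest-cni-195281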

                                                
                                    

Test skip (39/300)

Order skipped test Duration
5 TestDownloadOnly/v1.16.0/cached-images 0
6 TestDownloadOnly/v1.16.0/binaries 0
7 TestDownloadOnly/v1.16.0/kubectl 0
12 TestDownloadOnly/v1.28.4/cached-images 0
13 TestDownloadOnly/v1.28.4/binaries 0
14 TestDownloadOnly/v1.28.4/kubectl 0
19 TestDownloadOnly/v1.29.0-rc.2/cached-images 0
20 TestDownloadOnly/v1.29.0-rc.2/binaries 0
21 TestDownloadOnly/v1.29.0-rc.2/kubectl 0
25 TestDownloadOnlyKic 0
39 TestAddons/parallel/Olm 0
52 TestDockerFlags 0
55 TestDockerEnvContainerd 0
57 TestHyperKitDriverInstallOrUpdate 0
58 TestHyperkitDriverSkipUpgrade 0
109 TestFunctional/parallel/DockerEnv 0
110 TestFunctional/parallel/PodmanEnv 0
121 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
122 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
123 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
124 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
125 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
126 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
127 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
128 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
158 TestGvisorAddon 0
159 TestImageBuild 0
192 TestKicCustomNetwork 0
193 TestKicExistingNetwork 0
194 TestKicCustomSubnet 0
195 TestKicStaticIP 0
227 TestChangeNoneUser 0
230 TestScheduledStopWindows 0
232 TestSkaffold 0
234 TestInsufficientStorage 0
238 TestMissingContainerUpgrade 0
243 TestNetworkPlugins/group/kubenet 3.37
252 TestNetworkPlugins/group/cilium 3.64
267 TestStartStop/group/disable-driver-mounts 0.16
x
+
TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:213: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:297: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-719541 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-719541

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-719541

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-719541

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-719541

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-719541

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-719541

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-719541

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-719541

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-719541

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-719541

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-719541"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-719541"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-719541"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: kubenet-719541

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-719541"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-719541"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-719541" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-719541" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-719541" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-719541" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-719541" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-719541" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-719541" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-719541" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-719541"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-719541"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-719541"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-719541"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-719541"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-719541" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-719541" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-719541" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-719541"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-719541"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-719541"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-719541"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-719541"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-719541

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-719541"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-719541"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-719541"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-719541"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-719541"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-719541"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-719541"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-719541"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-719541"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-719541"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-719541"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-719541"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-719541"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-719541"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-719541"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-719541"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-719541"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-719541"

                                                
                                                
----------------------- debugLogs end: kubenet-719541 [took: 3.224079823s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-719541" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-719541
--- SKIP: TestNetworkPlugins/group/kubenet (3.37s)
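
The debugLogs dump above is produced by running a fixed list of kubectl, minikube, and ssh probes against the kubenet-719541 profile even though it was never started, which is why every probe reports either "context was not found" or "Profile ... not found". A minimal sketch of such a collect-and-label loop (illustrative only, not minikube's internal helper) could look like:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	profile := "kubenet-719541" // profile name taken from the report

	// A few representative probes; the real dump runs several dozen.
	probes := []struct {
		label string
		cmd   []string
	}{
		{"k8s: kubectl config", []string{"kubectl", "config", "view"}},
		{"k8s: coredns logs", []string{"kubectl", "--context", profile, "logs", "-n", "kube-system", "-l", "k8s-app=kube-dns"}},
		{"host: crio daemon status", []string{"out/minikube-linux-amd64", "-p", profile, "ssh", "sudo systemctl status crio"}},
	}

	for _, p := range probes {
		out, err := exec.Command(p.cmd[0], p.cmd[1:]...).CombinedOutput()
		fmt.Printf(">>> %s:\n%s\n", p.label, out)
		if err != nil {
			// Expected here: the profile was never created, so kubectl and
			// minikube both fail; the error text is still part of the dump.
			fmt.Printf("(probe failed: %v)\n\n", err)
		}
	}
}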

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.64s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-719541 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-719541

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-719541

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-719541

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-719541

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-719541

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-719541

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-719541

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-719541

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-719541

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-719541

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-719541"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-719541"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-719541"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: cilium-719541

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-719541"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-719541"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-719541" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-719541" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-719541" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-719541" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-719541" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-719541" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-719541" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-719541" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-719541"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-719541"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-719541"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-719541"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-719541"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-719541

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-719541

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-719541" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-719541" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-719541

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-719541

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-719541" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-719541" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-719541" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-719541" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-719541" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-719541"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-719541"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-719541"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-719541"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-719541"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-719541

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-719541"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-719541"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-719541"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-719541"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-719541"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-719541"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-719541"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-719541"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-719541"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-719541"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-719541"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-719541"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-719541"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-719541"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-719541"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-719541"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-719541"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-719541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-719541"

                                                
                                                
----------------------- debugLogs end: cilium-719541 [took: 3.478856386s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-719541" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-719541
--- SKIP: TestNetworkPlugins/group/cilium (3.64s)
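
The "kubectl config" section above shows an empty kubeconfig (clusters: null, contexts: null), which is why every cilium-719541 probe fails with "context was not found". One way to guard such probes is to check the kubeconfig for the context first; the sketch below assumes k8s.io/client-go is available and is not part of minikube's test code:

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	const context = "cilium-719541" // context name taken from the report

	// Load the kubeconfig from the default locations ($KUBECONFIG or ~/.kube/config).
	cfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
	if err != nil {
		fmt.Println("could not load kubeconfig:", err)
		return
	}

	if _, ok := cfg.Contexts[context]; !ok {
		// Mirrors the failures above: no such context, so kubectl probes
		// would only produce "context was not found" noise.
		fmt.Printf("context %q not found; skipping kubectl probes\n", context)
		return
	}
	fmt.Printf("context %q exists; safe to run kubectl --context %s ...\n", context, context)
}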

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-350596" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-350596
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)
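
Even though the test skips, the helpers_test.go lines above still delete the disable-driver-mounts-350596 profile. A minimal sketch of that cleanup pattern, using a hypothetical CleanupProfile helper rather than minikube's actual one:

package sketch

import (
	"os/exec"
	"testing"
)

// CleanupProfile registers a deferred "minikube delete -p <profile>" so the
// profile is removed even if the test skips or fails early.
func CleanupProfile(t *testing.T, profile string) {
	t.Helper()
	t.Cleanup(func() {
		t.Logf("Cleaning up %q profile ...", profile)
		out, err := exec.Command("out/minikube-linux-amd64", "delete", "-p", profile).CombinedOutput()
		if err != nil {
			t.Logf("failed to delete profile %s: %v\n%s", profile, err, out)
		}
	})
}

func TestDisableDriverMountsSketch(t *testing.T) {
	profile := "disable-driver-mounts-350596" // profile name taken from the report
	CleanupProfile(t, profile)
	t.Skip("only runs on virtualbox")
}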

                                                
                                    